2026-03-09T14:24:15.584 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-09T14:24:15.589 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-09T14:24:15.628 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/507
branch: squid
description: orch/cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/off mon_election/connectivity}
email: null
first_in_suite: false
flavor: default
job_id: '507'
last_in_suite: false
machine_type: vps
name: kyr-2026-03-09_11:23:05-orch-squid-none-default-vps
no_nested_subset: false
os_type: ubuntu
os_version: '22.04'
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    conf:
      global:
        mon election default strategy: 3
      mgr:
        debug mgr: 20
        debug ms: 1
        mgr/cephadm/use_agent: false
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 20
        osd mclock iops capacity threshold hdd: 49000
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - CEPHADM_STRAY_DAEMON
    - CEPHADM_FAILED_DAEMON
    - CEPHADM_AGENT_DOWN
    log-only-match:
    - CEPHADM_
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  install:
    ceph:
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
    extra_system_packages:
      deb:
      - python3-xmltodict
      - python3-jmespath
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - python3-jmespath
  workunit:
    branch: tt-squid
    sha1: 569c3e99c9b32a51b4eaf08731c728f4513ed589
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - mon.a
  - mon.c
  - mgr.y
  - osd.0
  - osd.1
  - osd.2
  - osd.3
  - client.0
  - node-exporter.a
  - alertmanager.a
- - mon.b
  - mgr.x
  - osd.4
  - osd.5
  - osd.6
  - osd.7
  - client.1
  - prometheus.a
  - grafana.a
  - node-exporter.b
seed: 3443
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 569c3e99c9b32a51b4eaf08731c728f4513ed589
targets:
  vm07.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEFpSqMWpIXptr832PTQFtk8inS6xhb0eHek7nbQdI45ybnOgJYurYWCvkmJLtS51jw7+Vmsbxd2ZFkbb3A9+XY=
  vm11.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD2A9Tk06foUKQLztiP+R77PLvkzfpB0NRPh8msGhVEv3QfxT++gbR+Q8j+P0LbFK+GC8L2JmbGHHhT49lzuHO8=
tasks:
- cephadm:
    cephadm_branch: v17.2.0
    cephadm_git_url: https://github.com/ceph/ceph
    image: quay.io/ceph/ceph:v17.2.0
- cephadm.shell:
    mon.a:
    - ceph config set mgr mgr/cephadm/use_repo_digest false --force
- cephadm.shell:
    env:
    - sha1
    mon.a:
    - radosgw-admin realm create --rgw-realm=r --default
    - radosgw-admin zonegroup create --rgw-zonegroup=default --master --default
    - radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=z --master --default
    - radosgw-admin period update --rgw-realm=r --commit
    - ceph orch apply rgw foo --realm r --zone z --placement=2 --port=8000
    - ceph orch apply rgw smpl
    - ceph osd pool create foo
    - rbd pool init foo
    - ceph orch apply iscsi foo u p
    - sleep 120
    - ceph config set mon mon_warn_on_insecure_global_id_reclaim false --force
    - ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false --force
    - ceph config set global log_to_journald false --force
    - ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1
- cephadm.shell:
    env:
    - sha1
    mon.a:
    - while ceph orch upgrade status | jq '.in_progress' | grep true && ! ceph orch upgrade status | jq '.message' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; ceph health detail ; sleep 30 ; done
    - ceph orch ps
    - ceph versions
    - echo "wait for servicemap items w/ changing names to refresh"
    - sleep 60
    - ceph orch ps
    - ceph versions
    - ceph orch upgrade status
    - ceph health detail
    - ceph versions | jq -e '.overall | length == 1'
    - ceph versions | jq -e '.overall | keys' | grep $sha1
    - ceph orch ls | grep '^osd '
- cephadm.shell:
    mon.a:
    - ceph orch upgrade ls
    - ceph orch upgrade ls --image quay.io/ceph/ceph --show-all-versions | grep 16.2.0
    - ceph orch upgrade ls --image quay.io/ceph/ceph --tags | grep v16.2.2
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-09_11:23:05
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473
2026-03-09T14:24:15.628 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa; will attempt to use it
2026-03-09T14:24:15.629 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks
2026-03-09T14:24:15.629 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-09T14:24:15.629 INFO:teuthology.task.internal:Checking packages...
2026-03-09T14:24:15.629 INFO:teuthology.task.internal:Checking packages for os_type 'ubuntu', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-09T14:24:15.629 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-09T14:24:15.629 INFO:teuthology.packaging:ref: None
2026-03-09T14:24:15.629 INFO:teuthology.packaging:tag: None
2026-03-09T14:24:15.629 INFO:teuthology.packaging:branch: squid
2026-03-09T14:24:15.629 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T14:24:15.630 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=squid
2026-03-09T14:24:16.248 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678-ge911bdeb-1jammy
2026-03-09T14:24:16.249 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-09T14:24:16.249 INFO:teuthology.task.internal:no buildpackages task found
2026-03-09T14:24:16.249 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-09T14:24:16.257 INFO:teuthology.task.internal:Saving configuration
2026-03-09T14:24:16.262 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-09T14:24:16.263 INFO:teuthology.task.internal.check_lock:Checking locks...
2026-03-09T14:24:16.270 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm07.local', 'description': '/archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/507', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-09 14:23:00.542373', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:07', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEFpSqMWpIXptr832PTQFtk8inS6xhb0eHek7nbQdI45ybnOgJYurYWCvkmJLtS51jw7+Vmsbxd2ZFkbb3A9+XY='}
2026-03-09T14:24:16.275 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm11.local', 'description': '/archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/507', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-09 14:23:00.543482', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:0b', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD2A9Tk06foUKQLztiP+R77PLvkzfpB0NRPh8msGhVEv3QfxT++gbR+Q8j+P0LbFK+GC8L2JmbGHHhT49lzuHO8='}
2026-03-09T14:24:16.275 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-09T14:24:16.276 INFO:teuthology.task.internal:roles: ubuntu@vm07.local - ['mon.a', 'mon.c', 'mgr.y', 'osd.0', 'osd.1', 'osd.2', 'osd.3', 'client.0', 'node-exporter.a', 'alertmanager.a']
2026-03-09T14:24:16.276 INFO:teuthology.task.internal:roles: ubuntu@vm11.local - ['mon.b', 'mgr.x', 'osd.4', 'osd.5', 'osd.6', 'osd.7', 'client.1', 'prometheus.a', 'grafana.a', 'node-exporter.b']
2026-03-09T14:24:16.276 INFO:teuthology.run_tasks:Running task console_log...
2026-03-09T14:24:16.283 DEBUG:teuthology.task.console_log:vm07 does not support IPMI; excluding
2026-03-09T14:24:16.290 DEBUG:teuthology.task.console_log:vm11 does not support IPMI; excluding
2026-03-09T14:24:16.290 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7fea14386170>, signals=[15])
2026-03-09T14:24:16.290 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-09T14:24:16.291 INFO:teuthology.task.internal:Opening connections...
2026-03-09T14:24:16.291 DEBUG:teuthology.task.internal:connecting to ubuntu@vm07.local
2026-03-09T14:24:16.292 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm07.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T14:24:16.351 DEBUG:teuthology.task.internal:connecting to ubuntu@vm11.local
2026-03-09T14:24:16.351 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm11.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T14:24:16.408 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-09T14:24:16.410 DEBUG:teuthology.orchestra.run.vm07:> uname -m 2026-03-09T14:24:16.428 INFO:teuthology.orchestra.run.vm07.stdout:x86_64 2026-03-09T14:24:16.428 DEBUG:teuthology.orchestra.run.vm07:> cat /etc/os-release 2026-03-09T14:24:16.473 INFO:teuthology.orchestra.run.vm07.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS" 2026-03-09T14:24:16.474 INFO:teuthology.orchestra.run.vm07.stdout:NAME="Ubuntu" 2026-03-09T14:24:16.474 INFO:teuthology.orchestra.run.vm07.stdout:VERSION_ID="22.04" 2026-03-09T14:24:16.474 INFO:teuthology.orchestra.run.vm07.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)" 2026-03-09T14:24:16.474 INFO:teuthology.orchestra.run.vm07.stdout:VERSION_CODENAME=jammy 2026-03-09T14:24:16.474 INFO:teuthology.orchestra.run.vm07.stdout:ID=ubuntu 2026-03-09T14:24:16.474 INFO:teuthology.orchestra.run.vm07.stdout:ID_LIKE=debian 2026-03-09T14:24:16.474 INFO:teuthology.orchestra.run.vm07.stdout:HOME_URL="https://www.ubuntu.com/" 2026-03-09T14:24:16.474 INFO:teuthology.orchestra.run.vm07.stdout:SUPPORT_URL="https://help.ubuntu.com/" 2026-03-09T14:24:16.474 INFO:teuthology.orchestra.run.vm07.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/" 2026-03-09T14:24:16.474 INFO:teuthology.orchestra.run.vm07.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy" 2026-03-09T14:24:16.474 INFO:teuthology.orchestra.run.vm07.stdout:UBUNTU_CODENAME=jammy 2026-03-09T14:24:16.474 INFO:teuthology.lock.ops:Updating vm07.local on lock server 2026-03-09T14:24:16.480 DEBUG:teuthology.orchestra.run.vm11:> uname -m 2026-03-09T14:24:16.496 INFO:teuthology.orchestra.run.vm11.stdout:x86_64 2026-03-09T14:24:16.496 DEBUG:teuthology.orchestra.run.vm11:> cat /etc/os-release 2026-03-09T14:24:16.540 INFO:teuthology.orchestra.run.vm11.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS" 2026-03-09T14:24:16.541 INFO:teuthology.orchestra.run.vm11.stdout:NAME="Ubuntu" 2026-03-09T14:24:16.541 INFO:teuthology.orchestra.run.vm11.stdout:VERSION_ID="22.04" 2026-03-09T14:24:16.541 INFO:teuthology.orchestra.run.vm11.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)" 2026-03-09T14:24:16.541 INFO:teuthology.orchestra.run.vm11.stdout:VERSION_CODENAME=jammy 2026-03-09T14:24:16.541 INFO:teuthology.orchestra.run.vm11.stdout:ID=ubuntu 2026-03-09T14:24:16.541 INFO:teuthology.orchestra.run.vm11.stdout:ID_LIKE=debian 2026-03-09T14:24:16.541 INFO:teuthology.orchestra.run.vm11.stdout:HOME_URL="https://www.ubuntu.com/" 2026-03-09T14:24:16.541 INFO:teuthology.orchestra.run.vm11.stdout:SUPPORT_URL="https://help.ubuntu.com/" 2026-03-09T14:24:16.541 INFO:teuthology.orchestra.run.vm11.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/" 2026-03-09T14:24:16.541 INFO:teuthology.orchestra.run.vm11.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy" 2026-03-09T14:24:16.541 INFO:teuthology.orchestra.run.vm11.stdout:UBUNTU_CODENAME=jammy 2026-03-09T14:24:16.541 INFO:teuthology.lock.ops:Updating vm11.local on lock server 2026-03-09T14:24:16.546 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles... 2026-03-09T14:24:16.548 INFO:teuthology.run_tasks:Running task internal.check_conflict... 2026-03-09T14:24:16.549 INFO:teuthology.task.internal:Checking for old test directory... 2026-03-09T14:24:16.549 DEBUG:teuthology.orchestra.run.vm07:> test '!' -e /home/ubuntu/cephtest 2026-03-09T14:24:16.551 DEBUG:teuthology.orchestra.run.vm11:> test '!' -e /home/ubuntu/cephtest 2026-03-09T14:24:16.584 INFO:teuthology.run_tasks:Running task internal.check_ceph_data... 
2026-03-09T14:24:16.585 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph... 2026-03-09T14:24:16.585 DEBUG:teuthology.orchestra.run.vm07:> test -z $(ls -A /var/lib/ceph) 2026-03-09T14:24:16.595 DEBUG:teuthology.orchestra.run.vm11:> test -z $(ls -A /var/lib/ceph) 2026-03-09T14:24:16.597 INFO:teuthology.orchestra.run.vm07.stderr:ls: cannot access '/var/lib/ceph': No such file or directory 2026-03-09T14:24:16.629 INFO:teuthology.orchestra.run.vm11.stderr:ls: cannot access '/var/lib/ceph': No such file or directory 2026-03-09T14:24:16.630 INFO:teuthology.run_tasks:Running task internal.vm_setup... 2026-03-09T14:24:16.641 DEBUG:teuthology.orchestra.run.vm07:> test -e /ceph-qa-ready 2026-03-09T14:24:16.643 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T14:24:16.886 DEBUG:teuthology.orchestra.run.vm11:> test -e /ceph-qa-ready 2026-03-09T14:24:16.889 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T14:24:17.213 INFO:teuthology.run_tasks:Running task internal.base... 2026-03-09T14:24:17.215 INFO:teuthology.task.internal:Creating test directory... 2026-03-09T14:24:17.215 DEBUG:teuthology.orchestra.run.vm07:> mkdir -p -m0755 -- /home/ubuntu/cephtest 2026-03-09T14:24:17.216 DEBUG:teuthology.orchestra.run.vm11:> mkdir -p -m0755 -- /home/ubuntu/cephtest 2026-03-09T14:24:17.219 INFO:teuthology.run_tasks:Running task internal.archive_upload... 2026-03-09T14:24:17.221 INFO:teuthology.run_tasks:Running task internal.archive... 2026-03-09T14:24:17.223 INFO:teuthology.task.internal:Creating archive directory... 2026-03-09T14:24:17.223 DEBUG:teuthology.orchestra.run.vm07:> install -d -m0755 -- /home/ubuntu/cephtest/archive 2026-03-09T14:24:17.263 DEBUG:teuthology.orchestra.run.vm11:> install -d -m0755 -- /home/ubuntu/cephtest/archive 2026-03-09T14:24:17.271 INFO:teuthology.run_tasks:Running task internal.coredump... 2026-03-09T14:24:17.272 INFO:teuthology.task.internal:Enabling coredump saving... 
2026-03-09T14:24:17.272 DEBUG:teuthology.orchestra.run.vm07:> test -f /run/.containerenv -o -f /.dockerenv 2026-03-09T14:24:17.309 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T14:24:17.309 DEBUG:teuthology.orchestra.run.vm11:> test -f /run/.containerenv -o -f /.dockerenv 2026-03-09T14:24:17.312 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T14:24:17.312 DEBUG:teuthology.orchestra.run.vm07:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf 2026-03-09T14:24:17.351 DEBUG:teuthology.orchestra.run.vm11:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf 2026-03-09T14:24:17.359 INFO:teuthology.orchestra.run.vm07.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core 2026-03-09T14:24:17.361 INFO:teuthology.orchestra.run.vm11.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core 2026-03-09T14:24:17.364 INFO:teuthology.orchestra.run.vm07.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core 2026-03-09T14:24:17.367 INFO:teuthology.orchestra.run.vm11.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core 2026-03-09T14:24:17.368 INFO:teuthology.run_tasks:Running task internal.sudo... 2026-03-09T14:24:17.371 INFO:teuthology.task.internal:Configuring sudo... 2026-03-09T14:24:17.371 DEBUG:teuthology.orchestra.run.vm07:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers 2026-03-09T14:24:17.407 DEBUG:teuthology.orchestra.run.vm11:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers 2026-03-09T14:24:17.418 INFO:teuthology.run_tasks:Running task internal.syslog... 2026-03-09T14:24:17.420 INFO:teuthology.task.internal.syslog:Starting syslog monitoring... 
2026-03-09T14:24:17.420 DEBUG:teuthology.orchestra.run.vm07:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog 2026-03-09T14:24:17.455 DEBUG:teuthology.orchestra.run.vm11:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog 2026-03-09T14:24:17.464 DEBUG:teuthology.orchestra.run.vm07:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log 2026-03-09T14:24:17.502 DEBUG:teuthology.orchestra.run.vm07:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log 2026-03-09T14:24:17.550 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-09T14:24:17.550 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf 2026-03-09T14:24:17.603 DEBUG:teuthology.orchestra.run.vm11:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log 2026-03-09T14:24:17.607 DEBUG:teuthology.orchestra.run.vm11:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log 2026-03-09T14:24:17.652 DEBUG:teuthology.orchestra.run.vm11:> set -ex 2026-03-09T14:24:17.652 DEBUG:teuthology.orchestra.run.vm11:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf 2026-03-09T14:24:17.703 DEBUG:teuthology.orchestra.run.vm07:> sudo service rsyslog restart 2026-03-09T14:24:17.704 DEBUG:teuthology.orchestra.run.vm11:> sudo service rsyslog restart 2026-03-09T14:24:17.761 INFO:teuthology.run_tasks:Running task internal.timer... 2026-03-09T14:24:17.780 INFO:teuthology.task.internal:Starting timer... 2026-03-09T14:24:17.780 INFO:teuthology.run_tasks:Running task pcp... 2026-03-09T14:24:17.783 INFO:teuthology.run_tasks:Running task selinux... 2026-03-09T14:24:17.791 INFO:teuthology.task.selinux:Excluding vm07: VMs are not yet supported 2026-03-09T14:24:17.791 INFO:teuthology.task.selinux:Excluding vm11: VMs are not yet supported 2026-03-09T14:24:17.791 DEBUG:teuthology.task.selinux:Getting current SELinux state 2026-03-09T14:24:17.792 DEBUG:teuthology.task.selinux:Existing SELinux modes: {} 2026-03-09T14:24:17.792 INFO:teuthology.task.selinux:Putting SELinux into permissive mode 2026-03-09T14:24:17.792 INFO:teuthology.run_tasks:Running task ansible.cephlab... 
2026-03-09T14:24:17.799 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}} 2026-03-09T14:24:17.799 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/ceph/ceph-cm-ansible.git 2026-03-09T14:24:17.807 INFO:teuthology.repo_utils:Fetching github.com_ceph_ceph-cm-ansible_main from origin 2026-03-09T14:24:18.540 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main 2026-03-09T14:24:18.546 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}] 2026-03-09T14:24:18.547 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventoryit0lz3qy --limit vm07.local,vm11.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs 2026-03-09T14:26:18.152 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm07.local'), Remote(name='ubuntu@vm11.local')] 2026-03-09T14:26:18.153 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm07.local' 2026-03-09T14:26:18.153 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm07.local', 'username': 'ubuntu', 'timeout': 60} 2026-03-09T14:26:18.215 DEBUG:teuthology.orchestra.run.vm07:> true 2026-03-09T14:26:18.424 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm07.local' 2026-03-09T14:26:18.425 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm11.local' 2026-03-09T14:26:18.425 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm11.local', 'username': 'ubuntu', 'timeout': 60} 2026-03-09T14:26:18.486 DEBUG:teuthology.orchestra.run.vm11:> true 2026-03-09T14:26:18.709 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm11.local' 2026-03-09T14:26:18.709 INFO:teuthology.run_tasks:Running task clock... 2026-03-09T14:26:18.712 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew... 
2026-03-09T14:26:18.712 INFO:teuthology.orchestra.run:Running command with timeout 360 2026-03-09T14:26:18.712 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-09T14:26:18.713 INFO:teuthology.orchestra.run:Running command with timeout 360 2026-03-09T14:26:18.713 DEBUG:teuthology.orchestra.run.vm11:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-09T14:26:18.733 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 14:26:18 ntpd[16099]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting 2026-03-09T14:26:18.733 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 14:26:18 ntpd[16099]: Command line: ntpd -gq 2026-03-09T14:26:18.733 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 14:26:18 ntpd[16099]: ---------------------------------------------------- 2026-03-09T14:26:18.733 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 14:26:18 ntpd[16099]: ntp-4 is maintained by Network Time Foundation, 2026-03-09T14:26:18.733 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 14:26:18 ntpd[16099]: Inc. (NTF), a non-profit 501(c)(3) public-benefit 2026-03-09T14:26:18.733 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 14:26:18 ntpd[16099]: corporation. 
Support and training for ntp-4 are 2026-03-09T14:26:18.733 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 14:26:18 ntpd[16099]: available at https://www.nwtime.org/support 2026-03-09T14:26:18.733 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 14:26:18 ntpd[16099]: ---------------------------------------------------- 2026-03-09T14:26:18.733 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 14:26:18 ntpd[16099]: proto: precision = 0.029 usec (-25) 2026-03-09T14:26:18.733 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 14:26:18 ntpd[16099]: basedate set to 2022-02-04 2026-03-09T14:26:18.733 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 14:26:18 ntpd[16099]: gps base set to 2022-02-06 (week 2196) 2026-03-09T14:26:18.733 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 14:26:18 ntpd[16099]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature 2026-03-09T14:26:18.733 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 14:26:18 ntpd[16099]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37 2026-03-09T14:26:18.734 INFO:teuthology.orchestra.run.vm07.stderr: 9 Mar 14:26:18 ntpd[16099]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 72 days ago 2026-03-09T14:26:18.734 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 14:26:18 ntpd[16099]: Listen and drop on 0 v6wildcard [::]:123 2026-03-09T14:26:18.734 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 14:26:18 ntpd[16099]: Listen and drop on 1 v4wildcard 0.0.0.0:123 2026-03-09T14:26:18.734 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 14:26:18 ntpd[16099]: Listen normally on 2 lo 127.0.0.1:123 2026-03-09T14:26:18.734 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 14:26:18 ntpd[16099]: Listen normally on 3 ens3 192.168.123.107:123 2026-03-09T14:26:18.734 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 14:26:18 ntpd[16099]: Listen normally on 4 lo [::1]:123 2026-03-09T14:26:18.734 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 14:26:18 ntpd[16099]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:7%2]:123 2026-03-09T14:26:18.734 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 14:26:18 ntpd[16099]: Listening on routing socket on fd #22 for interface updates 2026-03-09T14:26:18.768 INFO:teuthology.orchestra.run.vm11.stdout: 9 Mar 14:26:18 ntpd[16107]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting 2026-03-09T14:26:18.769 INFO:teuthology.orchestra.run.vm11.stdout: 9 Mar 14:26:18 ntpd[16107]: Command line: ntpd -gq 2026-03-09T14:26:18.769 INFO:teuthology.orchestra.run.vm11.stdout: 9 Mar 14:26:18 ntpd[16107]: ---------------------------------------------------- 2026-03-09T14:26:18.769 INFO:teuthology.orchestra.run.vm11.stdout: 9 Mar 14:26:18 ntpd[16107]: ntp-4 is maintained by Network Time Foundation, 2026-03-09T14:26:18.769 INFO:teuthology.orchestra.run.vm11.stdout: 9 Mar 14:26:18 ntpd[16107]: Inc. (NTF), a non-profit 501(c)(3) public-benefit 2026-03-09T14:26:18.769 INFO:teuthology.orchestra.run.vm11.stdout: 9 Mar 14:26:18 ntpd[16107]: corporation. 
Support and training for ntp-4 are 2026-03-09T14:26:18.769 INFO:teuthology.orchestra.run.vm11.stdout: 9 Mar 14:26:18 ntpd[16107]: available at https://www.nwtime.org/support 2026-03-09T14:26:18.769 INFO:teuthology.orchestra.run.vm11.stdout: 9 Mar 14:26:18 ntpd[16107]: ---------------------------------------------------- 2026-03-09T14:26:18.769 INFO:teuthology.orchestra.run.vm11.stdout: 9 Mar 14:26:18 ntpd[16107]: proto: precision = 0.030 usec (-25) 2026-03-09T14:26:18.769 INFO:teuthology.orchestra.run.vm11.stdout: 9 Mar 14:26:18 ntpd[16107]: basedate set to 2022-02-04 2026-03-09T14:26:18.769 INFO:teuthology.orchestra.run.vm11.stdout: 9 Mar 14:26:18 ntpd[16107]: gps base set to 2022-02-06 (week 2196) 2026-03-09T14:26:18.769 INFO:teuthology.orchestra.run.vm11.stdout: 9 Mar 14:26:18 ntpd[16107]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature 2026-03-09T14:26:18.769 INFO:teuthology.orchestra.run.vm11.stdout: 9 Mar 14:26:18 ntpd[16107]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37 2026-03-09T14:26:18.770 INFO:teuthology.orchestra.run.vm11.stderr: 9 Mar 14:26:18 ntpd[16107]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 72 days ago 2026-03-09T14:26:18.770 INFO:teuthology.orchestra.run.vm11.stdout: 9 Mar 14:26:18 ntpd[16107]: Listen and drop on 0 v6wildcard [::]:123 2026-03-09T14:26:18.770 INFO:teuthology.orchestra.run.vm11.stdout: 9 Mar 14:26:18 ntpd[16107]: Listen and drop on 1 v4wildcard 0.0.0.0:123 2026-03-09T14:26:18.771 INFO:teuthology.orchestra.run.vm11.stdout: 9 Mar 14:26:18 ntpd[16107]: Listen normally on 2 lo 127.0.0.1:123 2026-03-09T14:26:18.771 INFO:teuthology.orchestra.run.vm11.stdout: 9 Mar 14:26:18 ntpd[16107]: Listen normally on 3 ens3 192.168.123.111:123 2026-03-09T14:26:18.771 INFO:teuthology.orchestra.run.vm11.stdout: 9 Mar 14:26:18 ntpd[16107]: Listen normally on 4 lo [::1]:123 2026-03-09T14:26:18.771 INFO:teuthology.orchestra.run.vm11.stdout: 9 Mar 14:26:18 ntpd[16107]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:b%2]:123 2026-03-09T14:26:18.771 INFO:teuthology.orchestra.run.vm11.stdout: 9 Mar 14:26:18 ntpd[16107]: Listening on routing socket on fd #22 for interface updates 2026-03-09T14:26:19.733 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 14:26:19 ntpd[16099]: Soliciting pool server 139.144.71.56 2026-03-09T14:26:19.770 INFO:teuthology.orchestra.run.vm11.stdout: 9 Mar 14:26:19 ntpd[16107]: Soliciting pool server 139.144.71.56 2026-03-09T14:26:20.732 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 14:26:20 ntpd[16099]: Soliciting pool server 77.90.40.94 2026-03-09T14:26:20.732 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 14:26:20 ntpd[16099]: Soliciting pool server 31.209.85.242 2026-03-09T14:26:20.769 INFO:teuthology.orchestra.run.vm11.stdout: 9 Mar 14:26:20 ntpd[16107]: Soliciting pool server 77.90.40.94 2026-03-09T14:26:20.769 INFO:teuthology.orchestra.run.vm11.stdout: 9 Mar 14:26:20 ntpd[16107]: Soliciting pool server 31.209.85.242 2026-03-09T14:26:21.732 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 14:26:21 ntpd[16099]: Soliciting pool server 129.70.132.33 2026-03-09T14:26:21.732 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 14:26:21 ntpd[16099]: Soliciting pool server 129.70.132.35 2026-03-09T14:26:21.732 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 14:26:21 ntpd[16099]: Soliciting pool server 78.47.56.71 2026-03-09T14:26:21.768 INFO:teuthology.orchestra.run.vm11.stdout: 9 Mar 14:26:21 ntpd[16107]: Soliciting 
pool server 129.70.132.33 2026-03-09T14:26:21.768 INFO:teuthology.orchestra.run.vm11.stdout: 9 Mar 14:26:21 ntpd[16107]: Soliciting pool server 129.70.132.35 2026-03-09T14:26:21.768 INFO:teuthology.orchestra.run.vm11.stdout: 9 Mar 14:26:21 ntpd[16107]: Soliciting pool server 78.47.56.71 2026-03-09T14:26:22.732 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 14:26:22 ntpd[16099]: Soliciting pool server 141.144.241.16 2026-03-09T14:26:22.732 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 14:26:22 ntpd[16099]: Soliciting pool server 141.144.246.224 2026-03-09T14:26:22.732 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 14:26:22 ntpd[16099]: Soliciting pool server 77.42.16.222 2026-03-09T14:26:22.732 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 14:26:22 ntpd[16099]: Soliciting pool server 81.169.217.236 2026-03-09T14:26:22.767 INFO:teuthology.orchestra.run.vm11.stdout: 9 Mar 14:26:22 ntpd[16107]: Soliciting pool server 141.144.241.16 2026-03-09T14:26:22.768 INFO:teuthology.orchestra.run.vm11.stdout: 9 Mar 14:26:22 ntpd[16107]: Soliciting pool server 141.144.246.224 2026-03-09T14:26:22.768 INFO:teuthology.orchestra.run.vm11.stdout: 9 Mar 14:26:22 ntpd[16107]: Soliciting pool server 77.42.16.222 2026-03-09T14:26:22.768 INFO:teuthology.orchestra.run.vm11.stdout: 9 Mar 14:26:22 ntpd[16107]: Soliciting pool server 81.169.217.236 2026-03-09T14:26:23.732 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 14:26:23 ntpd[16099]: Soliciting pool server 93.177.65.20 2026-03-09T14:26:23.732 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 14:26:23 ntpd[16099]: Soliciting pool server 131.188.3.222 2026-03-09T14:26:23.732 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 14:26:23 ntpd[16099]: Soliciting pool server 168.119.211.223 2026-03-09T14:26:23.733 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 14:26:23 ntpd[16099]: Soliciting pool server 185.125.190.58 2026-03-09T14:26:23.767 INFO:teuthology.orchestra.run.vm11.stdout: 9 Mar 14:26:23 ntpd[16107]: Soliciting pool server 93.177.65.20 2026-03-09T14:26:23.767 INFO:teuthology.orchestra.run.vm11.stdout: 9 Mar 14:26:23 ntpd[16107]: Soliciting pool server 131.188.3.222 2026-03-09T14:26:23.767 INFO:teuthology.orchestra.run.vm11.stdout: 9 Mar 14:26:23 ntpd[16107]: Soliciting pool server 168.119.211.223 2026-03-09T14:26:23.767 INFO:teuthology.orchestra.run.vm11.stdout: 9 Mar 14:26:23 ntpd[16107]: Soliciting pool server 185.125.190.58 2026-03-09T14:26:24.732 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 14:26:24 ntpd[16099]: Soliciting pool server 185.125.190.57 2026-03-09T14:26:24.732 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 14:26:24 ntpd[16099]: Soliciting pool server 213.206.165.21 2026-03-09T14:26:24.732 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 14:26:24 ntpd[16099]: Soliciting pool server 185.216.176.59 2026-03-09T14:26:24.766 INFO:teuthology.orchestra.run.vm11.stdout: 9 Mar 14:26:24 ntpd[16107]: Soliciting pool server 185.125.190.57 2026-03-09T14:26:24.767 INFO:teuthology.orchestra.run.vm11.stdout: 9 Mar 14:26:24 ntpd[16107]: Soliciting pool server 213.206.165.21 2026-03-09T14:26:24.767 INFO:teuthology.orchestra.run.vm11.stdout: 9 Mar 14:26:24 ntpd[16107]: Soliciting pool server 185.216.176.59 2026-03-09T14:26:27.763 INFO:teuthology.orchestra.run.vm07.stdout: 9 Mar 14:26:27 ntpd[16099]: ntpd: time slew +0.008155 s 2026-03-09T14:26:27.763 INFO:teuthology.orchestra.run.vm07.stdout:ntpd: time slew +0.008155s 2026-03-09T14:26:27.786 INFO:teuthology.orchestra.run.vm07.stdout: remote refid st t when poll reach delay offset jitter 
2026-03-09T14:26:27.786 INFO:teuthology.orchestra.run.vm07.stdout:==============================================================================
2026-03-09T14:26:27.786 INFO:teuthology.orchestra.run.vm07.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:26:27.786 INFO:teuthology.orchestra.run.vm07.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:26:27.786 INFO:teuthology.orchestra.run.vm07.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:26:27.786 INFO:teuthology.orchestra.run.vm07.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:26:27.786 INFO:teuthology.orchestra.run.vm07.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:26:27.796 INFO:teuthology.orchestra.run.vm11.stdout: 9 Mar 14:26:27 ntpd[16107]: ntpd: time slew +0.019949 s
2026-03-09T14:26:27.796 INFO:teuthology.orchestra.run.vm11.stdout:ntpd: time slew +0.019949s
2026-03-09T14:26:27.816 INFO:teuthology.orchestra.run.vm11.stdout: remote refid st t when poll reach delay offset jitter
2026-03-09T14:26:27.816 INFO:teuthology.orchestra.run.vm11.stdout:==============================================================================
2026-03-09T14:26:27.816 INFO:teuthology.orchestra.run.vm11.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:26:27.816 INFO:teuthology.orchestra.run.vm11.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:26:27.816 INFO:teuthology.orchestra.run.vm11.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:26:27.816 INFO:teuthology.orchestra.run.vm11.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:26:27.816 INFO:teuthology.orchestra.run.vm11.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T14:26:27.816 INFO:teuthology.run_tasks:Running task cephadm...
2026-03-09T14:26:27.864 INFO:tasks.cephadm:Config: {'cephadm_branch': 'v17.2.0', 'cephadm_git_url': 'https://github.com/ceph/ceph', 'image': 'quay.io/ceph/ceph:v17.2.0', 'conf': {'global': {'mon election default strategy': 3}, 'mgr': {'debug mgr': 20, 'debug ms': 1, 'mgr/cephadm/use_agent': False}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd mclock iops capacity threshold hdd': 49000}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', 'CEPHADM_STRAY_DAEMON', 'CEPHADM_FAILED_DAEMON', 'CEPHADM_AGENT_DOWN'], 'log-only-match': ['CEPHADM_'], 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}
2026-03-09T14:26:27.864 INFO:tasks.cephadm:Cluster image is quay.io/ceph/ceph:v17.2.0
2026-03-09T14:26:27.864 INFO:tasks.cephadm:Cluster fsid is f59f9828-1bc3-11f1-bfd8-7b3d0c866040
2026-03-09T14:26:27.864 INFO:tasks.cephadm:Choosing monitor IPs and ports...
2026-03-09T14:26:27.864 INFO:tasks.cephadm:Monitor IPs: {'mon.a': '192.168.123.107', 'mon.c': '[v2:192.168.123.107:3301,v1:192.168.123.107:6790]', 'mon.b': '192.168.123.111'}
2026-03-09T14:26:27.864 INFO:tasks.cephadm:First mon is mon.a on vm07
2026-03-09T14:26:27.864 INFO:tasks.cephadm:First mgr is y
2026-03-09T14:26:27.864 INFO:tasks.cephadm:Normalizing hostnames...
2026-03-09T14:26:27.864 DEBUG:teuthology.orchestra.run.vm07:> sudo hostname $(hostname -s)
2026-03-09T14:26:27.874 DEBUG:teuthology.orchestra.run.vm11:> sudo hostname $(hostname -s)
2026-03-09T14:26:27.882 INFO:tasks.cephadm:Downloading cephadm (repo https://github.com/ceph/ceph ref v17.2.0)...
2026-03-09T14:26:27.882 DEBUG:teuthology.orchestra.run.vm07:> curl --silent https://raw.githubusercontent.com/ceph/ceph/v17.2.0/src/cephadm/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm 2026-03-09T14:26:28.146 INFO:teuthology.orchestra.run.vm07.stdout:-rw-rw-r-- 1 ubuntu ubuntu 320521 Mar 9 14:26 /home/ubuntu/cephtest/cephadm 2026-03-09T14:26:28.146 DEBUG:teuthology.orchestra.run.vm11:> curl --silent https://raw.githubusercontent.com/ceph/ceph/v17.2.0/src/cephadm/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm 2026-03-09T14:26:28.226 INFO:teuthology.orchestra.run.vm11.stdout:-rw-rw-r-- 1 ubuntu ubuntu 320521 Mar 9 14:26 /home/ubuntu/cephtest/cephadm 2026-03-09T14:26:28.226 DEBUG:teuthology.orchestra.run.vm07:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm 2026-03-09T14:26:28.230 DEBUG:teuthology.orchestra.run.vm11:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm 2026-03-09T14:26:28.238 INFO:tasks.cephadm:Pulling image quay.io/ceph/ceph:v17.2.0 on all hosts... 2026-03-09T14:26:28.238 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 pull 2026-03-09T14:26:28.274 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 pull 2026-03-09T14:26:28.347 INFO:teuthology.orchestra.run.vm07.stderr:Pulling container image quay.io/ceph/ceph:v17.2.0... 2026-03-09T14:26:28.352 INFO:teuthology.orchestra.run.vm11.stderr:Pulling container image quay.io/ceph/ceph:v17.2.0... 2026-03-09T14:29:16.601 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T14:29:16.602 INFO:teuthology.orchestra.run.vm07.stdout: "ceph_version": "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)", 2026-03-09T14:29:16.602 INFO:teuthology.orchestra.run.vm07.stdout: "image_id": "e1d6a67b021eb077ee22bf650f1a9fb1980a2cf5c36bdb9cba9eac6de8f702d9", 2026-03-09T14:29:16.602 INFO:teuthology.orchestra.run.vm07.stdout: "repo_digests": [ 2026-03-09T14:29:16.602 INFO:teuthology.orchestra.run.vm07.stdout: "quay.io/ceph/ceph@sha256:12a0a4f43413fd97a14a3d47a3451b2d2df50020835bb93db666209f3f77617a" 2026-03-09T14:29:16.602 INFO:teuthology.orchestra.run.vm07.stdout: ] 2026-03-09T14:29:16.602 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T14:29:16.647 INFO:teuthology.orchestra.run.vm11.stdout:{ 2026-03-09T14:29:16.648 INFO:teuthology.orchestra.run.vm11.stdout: "ceph_version": "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)", 2026-03-09T14:29:16.648 INFO:teuthology.orchestra.run.vm11.stdout: "image_id": "e1d6a67b021eb077ee22bf650f1a9fb1980a2cf5c36bdb9cba9eac6de8f702d9", 2026-03-09T14:29:16.648 INFO:teuthology.orchestra.run.vm11.stdout: "repo_digests": [ 2026-03-09T14:29:16.648 INFO:teuthology.orchestra.run.vm11.stdout: "quay.io/ceph/ceph@sha256:12a0a4f43413fd97a14a3d47a3451b2d2df50020835bb93db666209f3f77617a" 2026-03-09T14:29:16.648 INFO:teuthology.orchestra.run.vm11.stdout: ] 2026-03-09T14:29:16.648 INFO:teuthology.orchestra.run.vm11.stdout:} 2026-03-09T14:29:16.656 DEBUG:teuthology.orchestra.run.vm07:> sudo mkdir -p /etc/ceph 2026-03-09T14:29:16.663 DEBUG:teuthology.orchestra.run.vm11:> sudo mkdir -p /etc/ceph 2026-03-09T14:29:16.670 DEBUG:teuthology.orchestra.run.vm07:> sudo chmod 777 /etc/ceph 2026-03-09T14:29:16.711 
DEBUG:teuthology.orchestra.run.vm11:> sudo chmod 777 /etc/ceph
2026-03-09T14:29:16.717 INFO:tasks.cephadm:Writing seed config...
2026-03-09T14:29:16.718 INFO:tasks.cephadm: override: [global] mon election default strategy = 3
2026-03-09T14:29:16.718 INFO:tasks.cephadm: override: [mgr] debug mgr = 20
2026-03-09T14:29:16.718 INFO:tasks.cephadm: override: [mgr] debug ms = 1
2026-03-09T14:29:16.718 INFO:tasks.cephadm: override: [mgr] mgr/cephadm/use_agent = False
2026-03-09T14:29:16.718 INFO:tasks.cephadm: override: [mon] debug mon = 20
2026-03-09T14:29:16.718 INFO:tasks.cephadm: override: [mon] debug ms = 1
2026-03-09T14:29:16.718 INFO:tasks.cephadm: override: [mon] debug paxos = 20
2026-03-09T14:29:16.718 INFO:tasks.cephadm: override: [osd] debug ms = 1
2026-03-09T14:29:16.718 INFO:tasks.cephadm: override: [osd] debug osd = 20
2026-03-09T14:29:16.718 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000
2026-03-09T14:29:16.718 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-09T14:29:16.718 DEBUG:teuthology.orchestra.run.vm07:> dd of=/home/ubuntu/cephtest/seed.ceph.conf
2026-03-09T14:29:16.755 DEBUG:tasks.cephadm:Final config:
[global]
# make logging friendly to teuthology
log_to_file = true
log_to_stderr = false
log to journald = false
mon cluster log to file = true
mon cluster log file level = debug
mon clock drift allowed = 1.000

# replicate across OSDs, not hosts
osd crush chooseleaf type = 0
#osd pool default size = 2
osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd

# enable some debugging
auth debug = true
ms die on old message = true
ms die on bug = true
debug asserts on shutdown = true

# adjust warnings
mon max pg per osd = 10000   # >= luminous
mon pg warn max object skew = 0
mon osd allow primary affinity = true
mon osd allow pg remap = true
mon warn on legacy crush tunables = false
mon warn on crush straw calc version zero = false
mon warn on no sortbitwise = false
mon warn on osd down out interval zero = false
mon warn on too few osds = false
mon_warn_on_pool_pg_num_not_power_of_two = false

# disable pg_autoscaler by default for new pools
osd_pool_default_pg_autoscale_mode = off

# tests delete pools
mon allow pool delete = true

fsid = f59f9828-1bc3-11f1-bfd8-7b3d0c866040
mon election default strategy = 3

[osd]
osd scrub load threshold = 5.0
osd scrub max interval = 600
osd mclock profile = high_recovery_ops
osd recover clone overlap = true
osd recovery max chunk = 1048576
osd deep scrub update digest min age = 30
osd map max advance = 10
osd memory target autotune = true

# debugging
osd debug shutdown = true
osd debug op order = true
osd debug verify stray on activate = true
osd debug pg log writeout = true
osd debug verify cached snaps = true
osd debug verify missing on start = true
osd debug misdirected ops = true
osd op queue = debug_random
osd op queue cut off = debug_random
osd shutdown pgref assert = true
bdev debug aio = true
osd sloppy crc = true
debug ms = 1
debug osd = 20
osd mclock iops capacity threshold hdd = 49000

[mgr]
mon reweight min pgs per osd = 4
mon reweight min bytes per osd = 10
mgr/telemetry/nag = false
debug mgr = 20
debug ms = 1
mgr/cephadm/use_agent = False

[mon]
mon data avail warn = 5
mon mgr mkfs grace = 240
mon reweight min pgs per osd = 4
mon osd reporter subtree level = osd
mon osd prime pg temp = true
mon reweight min bytes per osd = 10

# rotate auth tickets quickly to exercise renewal paths
auth mon ticket ttl = 660   # 11m
auth service ticket ttl = 240   # 4m

# don't complain about global id reclaim
mon_warn_on_insecure_global_id_reclaim = false
mon_warn_on_insecure_global_id_reclaim_allowed = false
debug mon = 20
debug ms = 1
debug paxos = 20

[client.rgw]
rgw cache enabled = true
rgw enable ops log = true
rgw enable usage log = true
2026-03-09T14:29:16.756 DEBUG:teuthology.orchestra.run.vm07:mon.a> sudo journalctl -f -n 0 -u ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@mon.a.service
2026-03-09T14:29:16.798 DEBUG:teuthology.orchestra.run.vm07:mgr.y> sudo journalctl -f -n 0 -u ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@mgr.y.service
2026-03-09T14:29:16.842 INFO:tasks.cephadm:Bootstrapping...
2026-03-09T14:29:16.842 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 -v bootstrap --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id y --orphan-initial-daemons --skip-monitoring-stack --mon-ip 192.168.123.107 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring
2026-03-09T14:29:16.955 INFO:teuthology.orchestra.run.vm07.stderr:--------------------------------------------------------------------------------
2026-03-09T14:29:16.955 INFO:teuthology.orchestra.run.vm07.stderr:cephadm ['--image', 'quay.io/ceph/ceph:v17.2.0', '-v', 'bootstrap', '--fsid', 'f59f9828-1bc3-11f1-bfd8-7b3d0c866040', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--output-pub-ssh-key', '/home/ubuntu/cephtest/ceph.pub', '--mon-id', 'a', '--mgr-id', 'y', '--orphan-initial-daemons', '--skip-monitoring-stack', '--mon-ip', '192.168.123.107', '--skip-admin-label']
2026-03-09T14:29:16.955 INFO:teuthology.orchestra.run.vm07.stderr:Verifying podman|docker is present...
2026-03-09T14:29:16.955 INFO:teuthology.orchestra.run.vm07.stderr:Verifying lvm2 is present...
2026-03-09T14:29:16.955 INFO:teuthology.orchestra.run.vm07.stderr:Verifying time synchronization is in place...
2026-03-09T14:29:16.957 INFO:teuthology.orchestra.run.vm07.stderr:systemctl: Failed to get unit file state for chrony.service: No such file or directory
2026-03-09T14:29:16.960 INFO:teuthology.orchestra.run.vm07.stderr:systemctl: inactive
2026-03-09T14:29:16.961 INFO:teuthology.orchestra.run.vm07.stderr:systemctl: Failed to get unit file state for chronyd.service: No such file or directory
2026-03-09T14:29:16.964 INFO:teuthology.orchestra.run.vm07.stderr:systemctl: inactive
2026-03-09T14:29:16.966 INFO:teuthology.orchestra.run.vm07.stderr:systemctl: masked
2026-03-09T14:29:16.968 INFO:teuthology.orchestra.run.vm07.stderr:systemctl: inactive
2026-03-09T14:29:16.969 INFO:teuthology.orchestra.run.vm07.stderr:systemctl: Failed to get unit file state for ntpd.service: No such file or directory
2026-03-09T14:29:16.972 INFO:teuthology.orchestra.run.vm07.stderr:systemctl: inactive
2026-03-09T14:29:16.974 INFO:teuthology.orchestra.run.vm07.stderr:systemctl: enabled
2026-03-09T14:29:16.976 INFO:teuthology.orchestra.run.vm07.stderr:systemctl: active
2026-03-09T14:29:16.976 INFO:teuthology.orchestra.run.vm07.stderr:Unit ntp.service is enabled and running
2026-03-09T14:29:16.976 INFO:teuthology.orchestra.run.vm07.stderr:Repeating the final host check...
2026-03-09T14:29:16.976 INFO:teuthology.orchestra.run.vm07.stderr:docker (/usr/bin/docker) is present 2026-03-09T14:29:16.976 INFO:teuthology.orchestra.run.vm07.stderr:systemctl is present 2026-03-09T14:29:16.976 INFO:teuthology.orchestra.run.vm07.stderr:lvcreate is present 2026-03-09T14:29:16.978 INFO:teuthology.orchestra.run.vm07.stderr:systemctl: Failed to get unit file state for chrony.service: No such file or directory 2026-03-09T14:29:16.980 INFO:teuthology.orchestra.run.vm07.stderr:systemctl: inactive 2026-03-09T14:29:16.982 INFO:teuthology.orchestra.run.vm07.stderr:systemctl: Failed to get unit file state for chronyd.service: No such file or directory 2026-03-09T14:29:16.984 INFO:teuthology.orchestra.run.vm07.stderr:systemctl: inactive 2026-03-09T14:29:16.986 INFO:teuthology.orchestra.run.vm07.stderr:systemctl: masked 2026-03-09T14:29:16.988 INFO:teuthology.orchestra.run.vm07.stderr:systemctl: inactive 2026-03-09T14:29:16.989 INFO:teuthology.orchestra.run.vm07.stderr:systemctl: Failed to get unit file state for ntpd.service: No such file or directory 2026-03-09T14:29:16.991 INFO:teuthology.orchestra.run.vm07.stderr:systemctl: inactive 2026-03-09T14:29:16.994 INFO:teuthology.orchestra.run.vm07.stderr:systemctl: enabled 2026-03-09T14:29:16.995 INFO:teuthology.orchestra.run.vm07.stderr:systemctl: active 2026-03-09T14:29:16.996 INFO:teuthology.orchestra.run.vm07.stderr:Unit ntp.service is enabled and running 2026-03-09T14:29:16.996 INFO:teuthology.orchestra.run.vm07.stderr:Host looks OK 2026-03-09T14:29:16.996 INFO:teuthology.orchestra.run.vm07.stderr:Cluster fsid: f59f9828-1bc3-11f1-bfd8-7b3d0c866040 2026-03-09T14:29:16.996 INFO:teuthology.orchestra.run.vm07.stderr:Acquiring lock 140681599017952 on /run/cephadm/f59f9828-1bc3-11f1-bfd8-7b3d0c866040.lock 2026-03-09T14:29:16.996 INFO:teuthology.orchestra.run.vm07.stderr:Lock 140681599017952 acquired on /run/cephadm/f59f9828-1bc3-11f1-bfd8-7b3d0c866040.lock 2026-03-09T14:29:16.996 INFO:teuthology.orchestra.run.vm07.stderr:Verifying IP 192.168.123.107 port 3300 ... 2026-03-09T14:29:16.996 INFO:teuthology.orchestra.run.vm07.stderr:Verifying IP 192.168.123.107 port 6789 ... 
2026-03-09T14:29:16.996 INFO:teuthology.orchestra.run.vm07.stderr:Base mon IP is 192.168.123.107, final addrv is [v2:192.168.123.107:3300,v1:192.168.123.107:6789] 2026-03-09T14:29:16.997 INFO:teuthology.orchestra.run.vm07.stderr:/usr/sbin/ip: default via 192.168.123.1 dev ens3 proto dhcp src 192.168.123.107 metric 100 2026-03-09T14:29:16.997 INFO:teuthology.orchestra.run.vm07.stderr:/usr/sbin/ip: 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 2026-03-09T14:29:16.997 INFO:teuthology.orchestra.run.vm07.stderr:/usr/sbin/ip: 192.168.123.0/24 dev ens3 proto kernel scope link src 192.168.123.107 metric 100 2026-03-09T14:29:16.997 INFO:teuthology.orchestra.run.vm07.stderr:/usr/sbin/ip: 192.168.123.1 dev ens3 proto dhcp scope link src 192.168.123.107 metric 100 2026-03-09T14:29:16.998 INFO:teuthology.orchestra.run.vm07.stderr:/usr/sbin/ip: ::1 dev lo proto kernel metric 256 pref medium 2026-03-09T14:29:16.998 INFO:teuthology.orchestra.run.vm07.stderr:/usr/sbin/ip: fe80::/64 dev ens3 proto kernel metric 256 pref medium 2026-03-09T14:29:16.999 INFO:teuthology.orchestra.run.vm07.stderr:/usr/sbin/ip: 1: lo: mtu 65536 state UNKNOWN qlen 1000 2026-03-09T14:29:16.999 INFO:teuthology.orchestra.run.vm07.stderr:/usr/sbin/ip: inet6 ::1/128 scope host 2026-03-09T14:29:16.999 INFO:teuthology.orchestra.run.vm07.stderr:/usr/sbin/ip: valid_lft forever preferred_lft forever 2026-03-09T14:29:16.999 INFO:teuthology.orchestra.run.vm07.stderr:/usr/sbin/ip: 2: ens3: mtu 1500 state UP qlen 1000 2026-03-09T14:29:16.999 INFO:teuthology.orchestra.run.vm07.stderr:/usr/sbin/ip: inet6 fe80::5055:ff:fe00:7/64 scope link 2026-03-09T14:29:16.999 INFO:teuthology.orchestra.run.vm07.stderr:/usr/sbin/ip: valid_lft forever preferred_lft forever 2026-03-09T14:29:17.000 INFO:teuthology.orchestra.run.vm07.stderr:Mon IP `192.168.123.107` is in CIDR network `192.168.123.0/24` 2026-03-09T14:29:17.000 INFO:teuthology.orchestra.run.vm07.stderr:- internal network (--cluster-network) has not been provided, OSD replication will default to the public_network 2026-03-09T14:29:17.000 INFO:teuthology.orchestra.run.vm07.stderr:Pulling container image quay.io/ceph/ceph:v17.2.0... 2026-03-09T14:29:18.176 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/docker: v17.2.0: Pulling from ceph/ceph 2026-03-09T14:29:18.179 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/docker: Digest: sha256:12a0a4f43413fd97a14a3d47a3451b2d2df50020835bb93db666209f3f77617a 2026-03-09T14:29:18.179 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/docker: Status: Image is up to date for quay.io/ceph/ceph:v17.2.0 2026-03-09T14:29:18.180 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/docker: quay.io/ceph/ceph:v17.2.0 2026-03-09T14:29:18.287 INFO:teuthology.orchestra.run.vm07.stderr:ceph: ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable) 2026-03-09T14:29:18.317 INFO:teuthology.orchestra.run.vm07.stderr:Ceph version: ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable) 2026-03-09T14:29:18.317 INFO:teuthology.orchestra.run.vm07.stderr:Extracting ceph user uid/gid from container image... 2026-03-09T14:29:18.377 INFO:teuthology.orchestra.run.vm07.stderr:stat: 167 167 2026-03-09T14:29:18.400 INFO:teuthology.orchestra.run.vm07.stderr:Creating initial keys... 
2026-03-09T14:29:18.466 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-authtool: AQA+2a5pqJaIGxAAxPJZTCG5PURzsaCkOiukvA== 2026-03-09T14:29:18.571 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-authtool: AQA+2a5pmOzLIRAAEfofWPNWvVgwwxO3JJzgCg== 2026-03-09T14:29:18.658 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-authtool: AQA+2a5pz2r0JhAAZCO86bV0neo3NBSoGqn/gQ== 2026-03-09T14:29:18.680 INFO:teuthology.orchestra.run.vm07.stderr:Creating initial monmap... 2026-03-09T14:29:18.747 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/monmaptool: /usr/bin/monmaptool: monmap file /tmp/monmap 2026-03-09T14:29:18.747 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/monmaptool: setting min_mon_release = octopus 2026-03-09T14:29:18.747 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/monmaptool: /usr/bin/monmaptool: set fsid to f59f9828-1bc3-11f1-bfd8-7b3d0c866040 2026-03-09T14:29:18.747 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/monmaptool: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors) 2026-03-09T14:29:18.771 INFO:teuthology.orchestra.run.vm07.stderr:monmaptool for a [v2:192.168.123.107:3300,v1:192.168.123.107:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap 2026-03-09T14:29:18.771 INFO:teuthology.orchestra.run.vm07.stderr:setting min_mon_release = octopus 2026-03-09T14:29:18.771 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/monmaptool: set fsid to f59f9828-1bc3-11f1-bfd8-7b3d0c866040 2026-03-09T14:29:18.771 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors) 2026-03-09T14:29:18.771 INFO:teuthology.orchestra.run.vm07.stderr: 2026-03-09T14:29:18.771 INFO:teuthology.orchestra.run.vm07.stderr:Creating mon... 2026-03-09T14:29:18.852 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.844+0000 7f21aaedc880 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-09T14:29:18.852 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.844+0000 7f21aaedc880 1 imported monmap: 2026-03-09T14:29:18.852 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: epoch 0 2026-03-09T14:29:18.853 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 2026-03-09T14:29:18.853 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: last_changed 2026-03-09T14:29:18.743288+0000 2026-03-09T14:29:18.853 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: created 2026-03-09T14:29:18.743288+0000 2026-03-09T14:29:18.853 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: min_mon_release 15 (octopus) 2026-03-09T14:29:18.853 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: election_strategy: 1 2026-03-09T14:29:18.853 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: 0: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.a 2026-03-09T14:29:18.853 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: 2026-03-09T14:29:18.853 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.844+0000 7f21aaedc880 0 /usr/bin/ceph-mon: set fsid to f59f9828-1bc3-11f1-bfd8-7b3d0c866040 2026-03-09T14:29:18.853 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: RocksDB version: 6.15.5 2026-03-09T14:29:18.853 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: 2026-03-09T14:29:18.853 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 
2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Git sha rocksdb_build_git_sha:@0@ 2026-03-09T14:29:18.854 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Compile date Apr 18 2022 2026-03-09T14:29:18.856 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: DB SUMMARY 2026-03-09T14:29:18.856 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: 2026-03-09T14:29:18.856 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: DB Session ID: Z6OA386TGJTVPCAT7RV2 2026-03-09T14:29:18.856 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: 2026-03-09T14:29:18.856 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 0, files: 2026-03-09T14:29:18.856 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: 2026-03-09T14:29:18.856 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 2026-03-09T14:29:18.856 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: 2026-03-09T14:29:18.856 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.error_if_exists: 0 2026-03-09T14:29:18.856 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.create_if_missing: 1 2026-03-09T14:29:18.856 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.paranoid_checks: 1 2026-03-09T14:29:18.856 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-09T14:29:18.856 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.env: 0x55c7602c6860 2026-03-09T14:29:18.856 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.fs: Posix File System 2026-03-09T14:29:18.856 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.info_log: 0x55c773511320 2026-03-09T14:29:18.856 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-09T14:29:18.856 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.statistics: (nil) 2026-03-09T14:29:18.856 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.use_fsync: 0 2026-03-09T14:29:18.856 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.max_log_file_size: 0 2026-03-09T14:29:18.856 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-09T14:29:18.856 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 
2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-09T14:29:18.856 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-09T14:29:18.856 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-09T14:29:18.856 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.allow_fallocate: 1 2026-03-09T14:29:18.856 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-09T14:29:18.856 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-09T14:29:18.856 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.use_direct_reads: 0 2026-03-09T14:29:18.856 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-09T14:29:18.856 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.create_missing_column_families: 0 2026-03-09T14:29:18.856 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.db_log_dir: 2026-03-09T14:29:18.856 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.wal_dir: /var/lib/ceph/mon/ceph-a/store.db 2026-03-09T14:29:18.856 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-09T14:29:18.856 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-09T14:29:18.856 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.advise_random_on_open: 1 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.write_buffer_manager: 0x55c7737b1950 2026-03-09T14:29:18.857 
INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.new_table_reader_for_compaction_inputs: 0 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.rate_limiter: (nil) 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.enable_pipelined_write: 0 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.unordered_write: 0 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.row_cache: None 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.wal_filter: None 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.preserve_deletes: 0 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: 
Options.two_write_queues: 0 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.manual_wal_flush: 0 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.atomic_flush: 0 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.log_readahead_size: 0 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.max_background_jobs: 2 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.max_background_compactions: -1 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.max_subcompactions: 1 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.max_total_wal_size: 0 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 
2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.max_open_files: -1 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.bytes_per_sync: 0 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Options.max_background_flushes: -1 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Compression algorithms supported: 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: kZSTD supported: 0 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: kXpressCompression supported: 0 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: kLZ4Compression supported: 1 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: kBZip2Compression supported: 0 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: kZlibCompression supported: 1 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: kSnappyCompression supported: 1 2026-03-09T14:29:18.857 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-09T14:29:18.858 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: 
debug 2026-03-09T14:29:18.848+0000 7f21aaedc880 4 rocksdb: [db/db_impl/db_impl_open.cc:281] Creating manifest 1 2026-03-09T14:29:18.858 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: 2026-03-09T14:29:18.858 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: [db/version_set.cc:4725] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000001 2026-03-09T14:29:18.858 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: 2026-03-09T14:29:18.859 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: [db/column_family.cc:597] --------------- Options for column family [default]: 2026-03-09T14:29:18.859 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: 2026-03-09T14:29:18.859 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-09T14:29:18.859 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.merge_operator: 2026-03-09T14:29:18.859 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.compaction_filter: None 2026-03-09T14:29:18.859 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.compaction_filter_factory: None 2026-03-09T14:29:18.859 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-09T14:29:18.859 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-09T14:29:18.859 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-09T14:29:18.859 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c7734dad10) 2026-03-09T14:29:18.859 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: cache_index_and_filter_blocks: 1 2026-03-09T14:29:18.859 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-09T14:29:18.859 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-09T14:29:18.859 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: pin_top_level_index_and_filter: 1 2026-03-09T14:29:18.859 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: index_type: 0 2026-03-09T14:29:18.859 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: data_block_index_type: 0 2026-03-09T14:29:18.859 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: index_shortening: 1 2026-03-09T14:29:18.859 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: data_block_hash_table_util_ratio: 0.750000 2026-03-09T14:29:18.859 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: hash_index_allow_collision: 1 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: checksum: 1 2026-03-09T14:29:18.860 
INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: no_block_cache: 0 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: block_cache: 0x55c773542170 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: block_cache_name: BinnedLRUCache 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: block_cache_options: 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: capacity : 536870912 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: num_shard_bits : 4 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: strict_capacity_limit : 0 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: high_pri_pool_ratio: 0.000 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: block_cache_compressed: (nil) 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: persistent_cache: (nil) 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: block_size: 4096 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: block_size_deviation: 10 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: block_restart_interval: 16 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: index_block_restart_interval: 1 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: metadata_block_size: 4096 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: partition_filters: 0 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: use_delta_encoding: 1 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: filter_policy: rocksdb.BuiltinBloomFilter 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: whole_key_filtering: 1 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: verify_compression: 0 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: read_amp_bytes_per_bit: 0 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: format_version: 4 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: enable_index_compression: 1 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: block_align: 0 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.compression: NoCompression 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 
rocksdb: Options.prefix_extractor: nullptr 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.num_levels: 7 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.compression_opts.level: 32767 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T14:29:18.860 
INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.compression_opts.enabled: false 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T14:29:18.860 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T14:29:18.861 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T14:29:18.861 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T14:29:18.861 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T14:29:18.861 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T14:29:18.861 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-09T14:29:18.861 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T14:29:18.861 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T14:29:18.861 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T14:29:18.861 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: 
Options.arena_block_size: 4194304 2026-03-09T14:29:18.861 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-09T14:29:18.861 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T14:29:18.861 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.rate_limit_delay_max_milliseconds: 100 2026-03-09T14:29:18.861 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-09T14:29:18.861 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T14:29:18.861 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-09T14:29:18.861 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T14:29:18.861 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T14:29:18.861 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T14:29:18.861 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T14:29:18.861 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T14:29:18.861 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T14:29:18.861 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T14:29:18.861 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T14:29:18.862 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.table_properties_collectors: 2026-03-09T14:29:18.862 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.inplace_update_support: 0 2026-03-09T14:29:18.862 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T14:29:18.862 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: 
Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T14:29:18.862 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T14:29:18.862 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T14:29:18.862 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.bloom_locality: 0 2026-03-09T14:29:18.862 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.max_successive_merges: 0 2026-03-09T14:29:18.862 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-09T14:29:18.862 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-09T14:29:18.862 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.force_consistency_checks: 1 2026-03-09T14:29:18.862 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-09T14:29:18.862 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.ttl: 2592000 2026-03-09T14:29:18.862 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-09T14:29:18.862 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.enable_blob_files: false 2026-03-09T14:29:18.862 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.min_blob_size: 0 2026-03-09T14:29:18.862 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.blob_file_size: 268435456 2026-03-09T14:29:18.862 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-09T14:29:18.862 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T14:29:18.862 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T14:29:18.862 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: [db/version_set.cc:4773] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000001 succeeded,manifest_file_number is 1, next_file_number is 3, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-09T14:29:18.862 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: 2026-03-09T14:29:18.862 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 
4 rocksdb: [db/version_set.cc:4782] Column family [default] (ID 0), log number is 0 2026-03-09T14:29:18.862 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: 2026-03-09T14:29:18.862 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.852+0000 7f21aaedc880 4 rocksdb: [db/version_set.cc:4083] Creating manifest 3 2026-03-09T14:29:18.862 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: 2026-03-09T14:29:18.863 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.856+0000 7f21aaedc880 4 rocksdb: [db/db_impl/db_impl_open.cc:1701] SstFileManager instance 0x55c773528700 2026-03-09T14:29:18.863 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.856+0000 7f21aaedc880 4 rocksdb: DB pointer 0x55c77359c000 2026-03-09T14:29:18.864 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.856+0000 7f219cac6700 4 rocksdb: [db/db_impl/db_impl.cc:902] ------- DUMPING STATS ------- 2026-03-09T14:29:18.864 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.856+0000 7f219cac6700 4 rocksdb: [db/db_impl/db_impl.cc:903] 2026-03-09T14:29:18.865 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: ** DB Stats ** 2026-03-09T14:29:18.865 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T14:29:18.865 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-09T14:29:18.865 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T14:29:18.865 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T14:29:18.865 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-09T14:29:18.865 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 MB, 0.00 MB/s 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: ** Compaction Stats [default] ** 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: 
2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: ** Compaction Stats [default] ** 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: Flush(GB): cumulative 0.000, interval 0.000 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: AddFile(Total Files): cumulative 0, interval 0 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: AddFile(L0 Files): cumulative 0, interval 0 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: AddFile(Keys): cumulative 0, interval 0 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: ** File Read Latency Histogram By Level [default] ** 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: ** Compaction Stats [default] ** 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: ** Compaction 
Stats [default] ** 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: Flush(GB): cumulative 0.000, interval 0.000 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: AddFile(Total Files): cumulative 0, interval 0 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: AddFile(L0 Files): cumulative 0, interval 0 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: AddFile(Keys): cumulative 0, interval 0 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: ** File Read Latency Histogram By Level [default] ** 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.860+0000 7f21aaedc880 4 rocksdb: [db/db_impl/db_impl.cc:447] Shutdown: canceling all background work 2026-03-09T14:29:18.866 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.860+0000 7f21aaedc880 4 rocksdb: [db/db_impl/db_impl.cc:625] Shutdown complete 2026-03-09T14:29:18.867 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph-mon: debug 2026-03-09T14:29:18.860+0000 7f21aaedc880 0 /usr/bin/ceph-mon: created monfs at /var/lib/ceph/mon/ceph-a for mon.a 2026-03-09T14:29:18.895 INFO:teuthology.orchestra.run.vm07.stderr:create mon.a on 2026-03-09T14:29:19.053 INFO:teuthology.orchestra.run.vm07.stderr:systemctl: Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target. 2026-03-09T14:29:19.205 INFO:teuthology.orchestra.run.vm07.stderr:systemctl: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040.target → /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040.target. 
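The "created monfs at /var/lib/ceph/mon/ceph-a" message above marks the end of the offline mon setup; cephadm then wraps mon.a in per-cluster systemd units. ceph.target and the cluster target ceph-<fsid>.target are enabled above, and the templated instance unit for mon.a is linked in just below. Once enabled, those units can be inspected on vm07 with ordinary systemctl/journalctl calls; the unit names come straight from the symlink messages, nothing else is assumed.

FSID=f59f9828-1bc3-11f1-bfd8-7b3d0c866040
systemctl status "ceph-${FSID}@mon.a.service"       # the containerized mon.a
systemctl list-dependencies "ceph-${FSID}.target"   # every daemon unit of this cluster
journalctl -u "ceph-${FSID}@mon.a.service" -f       # follow the mon's container log, as teuthology does here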
2026-03-09T14:29:19.206 INFO:teuthology.orchestra.run.vm07.stderr:systemctl: Created symlink /etc/systemd/system/ceph.target.wants/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040.target → /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040.target. 2026-03-09T14:29:19.533 INFO:teuthology.orchestra.run.vm07.stderr:systemctl: Failed to reset failed state of unit ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@mon.a.service: Unit ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@mon.a.service not loaded. 2026-03-09T14:29:19.535 INFO:teuthology.orchestra.run.vm07.stderr:systemctl: Created symlink /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040.target.wants/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@mon.a.service → /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service. 2026-03-09T14:29:19.702 INFO:teuthology.orchestra.run.vm07.stderr:firewalld does not appear to be present 2026-03-09T14:29:19.702 INFO:teuthology.orchestra.run.vm07.stderr:Not possible to enable service . firewalld.service is not available 2026-03-09T14:29:19.702 INFO:teuthology.orchestra.run.vm07.stderr:Waiting for mon to start... 2026-03-09T14:29:19.702 INFO:teuthology.orchestra.run.vm07.stderr:Waiting for mon... 2026-03-09T14:29:19.901 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: cluster: 2026-03-09T14:29:19.901 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: id: f59f9828-1bc3-11f1-bfd8-7b3d0c866040 2026-03-09T14:29:19.901 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: health: HEALTH_OK 2026-03-09T14:29:19.901 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: 2026-03-09T14:29:19.901 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: services: 2026-03-09T14:29:19.901 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: mon: 1 daemons, quorum a (age 0.0519601s) 2026-03-09T14:29:19.901 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: mgr: no daemons active 2026-03-09T14:29:19.901 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: osd: 0 osds: 0 up, 0 in 2026-03-09T14:29:19.901 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: 2026-03-09T14:29:19.901 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: data: 2026-03-09T14:29:19.901 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: pools: 0 pools, 0 pgs 2026-03-09T14:29:19.901 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: objects: 0 objects, 0 B 2026-03-09T14:29:19.901 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: usage: 0 B used, 0 B / 0 B avail 2026-03-09T14:29:19.901 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: pgs: 2026-03-09T14:29:19.901 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: 2026-03-09T14:29:19.909 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:19 vm07 bash[17036]: cluster 2026-03-09T14:29:19.838134+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T14:29:19.932 INFO:teuthology.orchestra.run.vm07.stderr:mon is available 2026-03-09T14:29:19.932 INFO:teuthology.orchestra.run.vm07.stderr:Assimilating anything we can from ceph.conf... 
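cephadm's "Waiting for mon..." loop polls ceph status through the admin container until mon.a answers, which is why a HEALTH_OK report with a single mon and no mgr or OSDs appears above before anything else is deployed. A similar wait can be scripted against the JSON form of ceph status; the jq pattern mirrors the checks this suite already uses in its upgrade wait step, and the quorum_names field name is an assumption about ceph's JSON output rather than something shown in this log.

# Poll until at least one mon reports in quorum, then print the human-readable summary.
until ceph -s --format json 2>/dev/null | jq -e '.quorum_names | length >= 1' >/dev/null; do
  sleep 5
done
ceph -s    # expected here: HEALTH_OK, "mon: 1 daemons, quorum a", no mgr/osd yet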
2026-03-09T14:29:20.090 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: 2026-03-09T14:29:20.090 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: [global] 2026-03-09T14:29:20.090 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: fsid = f59f9828-1bc3-11f1-bfd8-7b3d0c866040 2026-03-09T14:29:20.090 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: mon_host = [v2:192.168.123.107:3300,v1:192.168.123.107:6789] 2026-03-09T14:29:20.091 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: mon_osd_allow_pg_remap = true 2026-03-09T14:29:20.091 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: mon_osd_allow_primary_affinity = true 2026-03-09T14:29:20.091 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: mon_warn_on_no_sortbitwise = false 2026-03-09T14:29:20.091 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: osd_crush_chooseleaf_type = 0 2026-03-09T14:29:20.091 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: 2026-03-09T14:29:20.091 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: [mgr] 2026-03-09T14:29:20.091 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: mgr/cephadm/use_agent = False 2026-03-09T14:29:20.091 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: mgr/telemetry/nag = false 2026-03-09T14:29:20.091 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: 2026-03-09T14:29:20.091 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: [osd] 2026-03-09T14:29:20.091 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: osd_map_max_advance = 10 2026-03-09T14:29:20.091 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: osd_mclock_iops_capacity_threshold_hdd = 49000 2026-03-09T14:29:20.091 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: osd_sloppy_crc = true 2026-03-09T14:29:20.127 INFO:teuthology.orchestra.run.vm07.stderr:Generating new minimal ceph.conf... 2026-03-09T14:29:20.329 INFO:teuthology.orchestra.run.vm07.stderr:Restarting the monitor... 2026-03-09T14:29:20.419 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 systemd[1]: Stopping Ceph mon.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040... 2026-03-09T14:29:20.419 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17406]: Error response from daemon: No such container: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-mon.a 2026-03-09T14:29:20.419 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17036]: debug 2026-03-09T14:29:20.344+0000 7f8321659700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-09T14:29:20.419 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17036]: debug 2026-03-09T14:29:20.344+0000 7f8321659700 -1 mon.a@0(leader) e1 *** Got Signal Terminated *** 2026-03-09T14:29:20.419 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17413]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-mon-a 2026-03-09T14:29:20.424 INFO:teuthology.orchestra.run.vm07.stderr:Setting mon public_network to 192.168.123.0/24 2026-03-09T14:29:20.675 INFO:teuthology.orchestra.run.vm07.stderr:Wrote config to /etc/ceph/ceph.conf 2026-03-09T14:29:20.675 INFO:teuthology.orchestra.run.vm07.stderr:Wrote keyring to /etc/ceph/ceph.client.admin.keyring 2026-03-09T14:29:20.675 INFO:teuthology.orchestra.run.vm07.stderr:Creating mgr... 
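The "Assimilating", "Generating new minimal ceph.conf" and "Setting mon public_network" messages above correspond to standard ceph config subcommands: the options from the bootstrap ceph.conf (the [global]/[mgr]/[osd] block printed above) are absorbed into the mon configuration database, only a minimal fsid/mon_host stub is kept in /etc/ceph/ceph.conf, and the public network is recorded centrally. Done by hand, with the values taken from the log, the same sequence would look like:

ceph config assimilate-conf -i /etc/ceph/ceph.conf        # absorb the options shown above into the mon config db
ceph config generate-minimal-conf > /etc/ceph/ceph.conf   # keep only fsid and mon_host on disk
ceph config set mon public_network 192.168.123.0/24       # as logged: "Setting mon public_network to 192.168.123.0/24"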
2026-03-09T14:29:20.675 INFO:teuthology.orchestra.run.vm07.stderr:Verifying port 9283 ... 2026-03-09T14:29:20.683 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17447]: Error response from daemon: No such container: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-mon.a 2026-03-09T14:29:20.683 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 systemd[1]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@mon.a.service: Deactivated successfully. 2026-03-09T14:29:20.683 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 systemd[1]: Stopped Ceph mon.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:29:20.683 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 systemd[1]: Started Ceph mon.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:29:20.683 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.516+0000 7fb1979b5880 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-09T14:29:20.683 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.516+0000 7fb1979b5880 0 ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable), process ceph-mon, pid 7 2026-03-09T14:29:20.683 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.516+0000 7fb1979b5880 0 pidfile_write: ignore empty --pid-file 2026-03-09T14:29:20.683 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 0 load: jerasure load: lrc 2026-03-09T14:29:20.683 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: RocksDB version: 6.15.5 2026-03-09T14:29:20.683 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Git sha rocksdb_build_git_sha:@0@ 2026-03-09T14:29:20.683 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Compile date Apr 18 2022 2026-03-09T14:29:20.683 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: DB SUMMARY 2026-03-09T14:29:20.683 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: DB Session ID: QY9CYVOZ4VUZQDO21H1V 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: CURRENT file: CURRENT 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: IDENTITY file: IDENTITY 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: MANIFEST file: MANIFEST-000009 size: 131 Bytes 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 1, files: 000008.sst 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 000010.log size: 73715 ; 2026-03-09T14:29:20.684 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.error_if_exists: 0 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.create_if_missing: 0 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.paranoid_checks: 1 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.env: 0x558d846c6860 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.fs: Posix File System 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.info_log: 0x558d8645be00 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.statistics: (nil) 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.use_fsync: 0 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.max_log_file_size: 0 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.allow_fallocate: 1 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.use_direct_reads: 0 2026-03-09T14:29:20.684 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.create_missing_column_families: 0 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.db_log_dir: 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.wal_dir: /var/lib/ceph/mon/ceph-a/store.db 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.advise_random_on_open: 1 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.write_buffer_manager: 0x558d8654a240 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.new_table_reader_for_compaction_inputs: 0 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.rate_limiter: (nil) 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 
vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.enable_pipelined_write: 0 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.unordered_write: 0 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.row_cache: None 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.wal_filter: None 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.preserve_deletes: 0 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.two_write_queues: 0 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.manual_wal_flush: 0 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.atomic_flush: 0 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-09T14:29:20.684 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.log_readahead_size: 0 2026-03-09T14:29:20.684 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.max_background_jobs: 2 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.max_background_compactions: -1 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.max_subcompactions: 1 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.max_total_wal_size: 0 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 
bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.max_open_files: -1 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.bytes_per_sync: 0 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.max_background_flushes: -1 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Compression algorithms supported: 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: kZSTD supported: 0 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: kXpressCompression supported: 0 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: kLZ4Compression supported: 1 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: kBZip2Compression supported: 0 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: kZlibCompression supported: 1 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: kSnappyCompression supported: 1 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: [db/version_set.cc:4725] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000009 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: 
[db/column_family.cc:597] --------------- Options for column family [default]: 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.merge_operator: 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.compaction_filter: None 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.compaction_filter_factory: None 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d86429d00) 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: cache_index_and_filter_blocks: 1 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: pin_top_level_index_and_filter: 1 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: index_type: 0 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: data_block_index_type: 0 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: index_shortening: 1 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: data_block_hash_table_util_ratio: 0.750000 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: hash_index_allow_collision: 1 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: checksum: 1 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: no_block_cache: 0 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: block_cache: 0x558d86490170 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: block_cache_name: BinnedLRUCache 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: block_cache_options: 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: 
capacity : 536870912 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: num_shard_bits : 4 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: strict_capacity_limit : 0 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: high_pri_pool_ratio: 0.000 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: block_cache_compressed: (nil) 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: persistent_cache: (nil) 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: block_size: 4096 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: block_size_deviation: 10 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: block_restart_interval: 16 2026-03-09T14:29:20.685 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: index_block_restart_interval: 1 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: metadata_block_size: 4096 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: partition_filters: 0 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: use_delta_encoding: 1 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: filter_policy: rocksdb.BuiltinBloomFilter 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: whole_key_filtering: 1 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: verify_compression: 0 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: read_amp_bytes_per_bit: 0 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: format_version: 4 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: enable_index_compression: 1 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: block_align: 0 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.compression: NoCompression 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-09T14:29:20.686 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.num_levels: 7 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.compression_opts.level: 32767 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 
7fb1979b5880 4 rocksdb: Options.compression_opts.enabled: false 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T14:29:20.686 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.arena_block_size: 4194304 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.rate_limit_delay_max_milliseconds: 100 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T14:29:20.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.table_properties_collectors: 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.inplace_update_support: 0 2026-03-09T14:29:20.687 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.bloom_locality: 0 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.max_successive_merges: 0 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.force_consistency_checks: 1 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.ttl: 2592000 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.enable_blob_files: false 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.min_blob_size: 0 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.blob_file_size: 268435456 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 
2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: [db/version_set.cc:4773] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000009 succeeded,manifest_file_number is 9, next_file_number is 11, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: [db/version_set.cc:4782] Column family [default] (ID 0), log number is 5 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.520+0000 7fb1979b5880 4 rocksdb: [db/version_set.cc:4083] Creating manifest 13 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.532+0000 7fb1979b5880 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773066560535230, "job": 1, "event": "recovery_started", "wal_files": [10]} 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.532+0000 7fb1979b5880 4 rocksdb: [db/db_impl/db_impl_open.cc:847] Recovering log #10 mode 2 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.532+0000 7fb1979b5880 3 rocksdb: [table/block_based/filter_policy.cc:996] Using legacy Bloom filter with high (20) bits/key. Dramatic filter space and/or accuracy improvement is available with format_version>=5. 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.532+0000 7fb1979b5880 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773066560536750, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 14, "file_size": 70687, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 69004, "index_size": 176, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 581, "raw_key_size": 9687, "raw_average_key_size": 49, "raw_value_size": 63573, "raw_average_value_size": 324, "num_data_blocks": 8, "num_entries": 196, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1773066560, "oldest_key_time": 0, "file_creation_time": 0, "db_id": "a15f1eb3-64c5-40cb-954e-7e6d47d8bfb6", "db_session_id": "QY9CYVOZ4VUZQDO21H1V"}} 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.532+0000 7fb1979b5880 4 rocksdb: [db/version_set.cc:4083] Creating manifest 15 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.536+0000 7fb1979b5880 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773066560538264, "job": 1, "event": "recovery_finished"} 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.536+0000 7fb1979b5880 4 rocksdb: 
[file/delete_scheduler.cc:73] Deleted file /var/lib/ceph/mon/ceph-a/store.db/000010.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.536+0000 7fb1979b5880 4 rocksdb: [db/db_impl/db_impl_open.cc:1701] SstFileManager instance 0x558d86476700 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.536+0000 7fb1979b5880 4 rocksdb: DB pointer 0x558d864ea000 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.536+0000 7fb18773d700 4 rocksdb: [db/db_impl/db_impl.cc:902] ------- DUMPING STATS ------- 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.536+0000 7fb18773d700 4 rocksdb: [db/db_impl/db_impl.cc:903] 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: ** DB Stats ** 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 MB, 0.00 MB/s 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: ** Compaction Stats [default] ** 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: L0 2/0 70.79 KB 0.5 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 49.0 0.00 0.00 1 0.001 0 0 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: Sum 2/0 70.79 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 49.0 0.00 0.00 1 0.001 0 0 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 49.0 0.00 0.00 1 0.001 0 0 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 
14:29:20 vm07 bash[17480]: ** Compaction Stats [default] ** 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 49.0 0.00 0.00 1 0.001 0 0 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: Flush(GB): cumulative 0.000, interval 0.000 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: AddFile(Total Files): cumulative 0, interval 0 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: AddFile(L0 Files): cumulative 0, interval 0 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: AddFile(Keys): cumulative 0, interval 0 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: Cumulative compaction: 0.00 GB write, 4.28 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: Interval compaction: 0.00 GB write, 4.28 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: ** File Read Latency Histogram By Level [default] ** 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: ** Compaction Stats [default] ** 2026-03-09T14:29:20.687 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop 2026-03-09T14:29:20.688 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-09T14:29:20.688 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: L0 2/0 70.79 KB 0.5 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 49.0 0.00 0.00 1 0.001 0 0 2026-03-09T14:29:20.688 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: Sum 2/0 70.79 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 49.0 0.00 0.00 1 0.001 0 0 2026-03-09T14:29:20.688 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 2026-03-09T14:29:20.688 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: ** Compaction Stats [default] ** 2026-03-09T14:29:20.688 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop 2026-03-09T14:29:20.688 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-09T14:29:20.688 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 49.0 0.00 0.00 1 0.001 0 0 2026-03-09T14:29:20.688 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T14:29:20.688 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: Flush(GB): cumulative 0.000, interval 0.000 2026-03-09T14:29:20.688 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-09T14:29:20.688 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: AddFile(Total Files): cumulative 0, interval 0 2026-03-09T14:29:20.688 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: AddFile(L0 Files): cumulative 0, interval 0 2026-03-09T14:29:20.688 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: AddFile(Keys): cumulative 0, interval 0 2026-03-09T14:29:20.688 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: Cumulative compaction: 0.00 GB write, 4.27 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T14:29:20.688 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T14:29:20.688 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-09T14:29:20.688 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: ** File Read Latency Histogram By Level [default] ** 2026-03-09T14:29:20.688 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.540+0000 7fb1979b5880 0 starting mon.a rank 0 at public addrs [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] at bind addrs [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon_data /var/lib/ceph/mon/ceph-a fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 2026-03-09T14:29:20.688 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.540+0000 7fb1979b5880 1 mon.a@-1(???) 
e1 preinit fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 2026-03-09T14:29:20.688 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.540+0000 7fb1979b5880 0 mon.a@-1(???).mds e1 new map 2026-03-09T14:29:20.688 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.540+0000 7fb1979b5880 0 mon.a@-1(???).mds e1 print_map 2026-03-09T14:29:20.688 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: e1 2026-03-09T14:29:20.688 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T14:29:20.688 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T14:29:20.688 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: legacy client fscid: -1 2026-03-09T14:29:20.688 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: 2026-03-09T14:29:20.688 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: No filesystems configured 2026-03-09T14:29:20.688 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.540+0000 7fb1979b5880 0 mon.a@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-09T14:29:20.688 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.540+0000 7fb1979b5880 0 mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T14:29:20.688 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.540+0000 7fb1979b5880 0 mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T14:29:20.688 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.540+0000 7fb1979b5880 0 mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T14:29:20.688 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: debug 2026-03-09T14:29:20.540+0000 7fb1979b5880 1 mon.a@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3 2026-03-09T14:29:20.688 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: cluster 2026-03-09T14:29:20.547394+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T14:29:20.688 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: cluster 2026-03-09T14:29:20.547422+0000 mon.a (mon.0) 2 : cluster [DBG] monmap e1: 1 mons at {a=[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0]} 2026-03-09T14:29:20.688 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: cluster 2026-03-09T14:29:20.549806+0000 mon.a (mon.0) 3 : cluster [DBG] fsmap 2026-03-09T14:29:20.688 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: cluster 2026-03-09T14:29:20.549891+0000 mon.a (mon.0) 4 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-09T14:29:20.688 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 bash[17480]: cluster 2026-03-09T14:29:20.550582+0000 mon.a (mon.0) 5 : cluster [DBG] mgrmap e1: no daemons active 
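At this point the restarted mon.a has reloaded its RocksDB store and re-formed a single-member quorum, while the mgr map still reports no active daemons. A hedged sketch of how that state could be checked by hand on vm07, assuming access to the admin keyring (for example inside cephadm shell; not part of the test itself):

    # Confirm mon.a is back and is the only member of quorum after the restart.
    ceph quorum_status -f json | jq '.quorum_names'   # expect ["a"] for this single-mon bootstrap
    ceph mon stat
    # mgr.y has only just been created; its map stays unavailable until the modules load,
    # which is what the bootstrap is waiting on below while it prints "Waiting for mgr...".
    ceph status -f json | jq '.mgrmap.available'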
2026-03-09T14:29:20.863 INFO:teuthology.orchestra.run.vm07.stderr:systemctl: Failed to reset failed state of unit ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@mgr.y.service: Unit ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@mgr.y.service not loaded. 2026-03-09T14:29:20.865 INFO:teuthology.orchestra.run.vm07.stderr:systemctl: Created symlink /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040.target.wants/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@mgr.y.service → /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service. 2026-03-09T14:29:20.981 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:29:21.029 INFO:teuthology.orchestra.run.vm07.stderr:firewalld does not appear to be present 2026-03-09T14:29:21.029 INFO:teuthology.orchestra.run.vm07.stderr:Not possible to enable service . firewalld.service is not available 2026-03-09T14:29:21.029 INFO:teuthology.orchestra.run.vm07.stderr:firewalld does not appear to be present 2026-03-09T14:29:21.029 INFO:teuthology.orchestra.run.vm07.stderr:Not possible to open ports <[9283]>. firewalld.service is not available 2026-03-09T14:29:21.029 INFO:teuthology.orchestra.run.vm07.stderr:Waiting for mgr to start... 2026-03-09T14:29:21.029 INFO:teuthology.orchestra.run.vm07.stderr:Waiting for mgr... 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: { 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "fsid": "f59f9828-1bc3-11f1-bfd8-7b3d0c866040", 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "health": { 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "status": "HEALTH_OK", 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "checks": {}, 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "mutes": [] 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: }, 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "election_epoch": 5, 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "quorum": [ 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: 0 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: ], 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "quorum_names": [ 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "a" 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: ], 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "quorum_age": 0, 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "monmap": { 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "min_mon_release_name": "quincy", 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "num_mons": 1 
2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: }, 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "osdmap": { 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "num_osds": 0, 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "num_up_osds": 0, 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "osd_up_since": 0, 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "num_in_osds": 0, 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "osd_in_since": 0, 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "num_remapped_pgs": 0 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: }, 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "pgmap": { 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "pgs_by_state": [], 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "num_pgs": 0, 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "num_pools": 0, 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "num_objects": 0, 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "data_bytes": 0, 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "bytes_used": 0, 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "bytes_avail": 0, 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "bytes_total": 0 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: }, 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "fsmap": { 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "by_rank": [], 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "up:standby": 0 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: }, 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "mgrmap": { 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "available": false, 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "num_standbys": 0, 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "modules": [ 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "iostat", 2026-03-09T14:29:21.270 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "nfs", 2026-03-09T14:29:21.271 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "restful" 2026-03-09T14:29:21.271 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: ], 2026-03-09T14:29:21.271 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "services": {} 2026-03-09T14:29:21.271 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: }, 2026-03-09T14:29:21.271 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "servicemap": { 2026-03-09T14:29:21.271 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-09T14:29:21.271 
INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "modified": "2026-03-09T14:29:19.844812+0000", 2026-03-09T14:29:21.271 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "services": {} 2026-03-09T14:29:21.271 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: }, 2026-03-09T14:29:21.271 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "progress_events": {} 2026-03-09T14:29:21.271 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: } 2026-03-09T14:29:21.284 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:20 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:29:21.324 INFO:teuthology.orchestra.run.vm07.stderr:mgr not available, waiting (1/15)... 2026-03-09T14:29:21.624 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:21 vm07 bash[17785]: debug 2026-03-09T14:29:21.352+0000 7f49daa19000 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T14:29:21.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:21 vm07 bash[17480]: audit 2026-03-09T14:29:20.618359+0000 mon.a (mon.0) 6 : audit [INF] from='client.? 192.168.123.107:0/2635445731' entity='client.admin' 2026-03-09T14:29:21.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:21 vm07 bash[17480]: audit 2026-03-09T14:29:21.261980+0000 mon.a (mon.0) 7 : audit [DBG] from='client.? 192.168.123.107:0/1860983945' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T14:29:21.918 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:21 vm07 bash[17785]: debug 2026-03-09T14:29:21.620+0000 7f49daa19000 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T14:29:22.398 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:22 vm07 bash[17785]: debug 2026-03-09T14:29:22.056+0000 7f49daa19000 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T14:29:22.398 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:22 vm07 bash[17785]: debug 2026-03-09T14:29:22.136+0000 7f49daa19000 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T14:29:22.398 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:22 vm07 bash[17785]: debug 2026-03-09T14:29:22.300+0000 7f49daa19000 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T14:29:22.398 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:22 vm07 bash[17785]: debug 2026-03-09T14:29:22.392+0000 7f49daa19000 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T14:29:22.667 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:22 vm07 bash[17785]: debug 2026-03-09T14:29:22.436+0000 7f49daa19000 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T14:29:22.667 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:22 vm07 bash[17785]: debug 2026-03-09T14:29:22.552+0000 7f49daa19000 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T14:29:22.667 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:22 vm07 bash[17785]: debug 2026-03-09T14:29:22.604+0000 7f49daa19000 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T14:29:23.117 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:22 vm07 bash[17785]: debug 2026-03-09T14:29:22.664+0000 7f49daa19000 
-1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T14:29:23.373 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:23 vm07 bash[17785]: debug 2026-03-09T14:29:23.108+0000 7f49daa19000 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T14:29:23.373 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:23 vm07 bash[17785]: debug 2026-03-09T14:29:23.156+0000 7f49daa19000 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T14:29:23.373 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:23 vm07 bash[17785]: debug 2026-03-09T14:29:23.204+0000 7f49daa19000 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T14:29:23.557 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: 2026-03-09T14:29:23.557 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: { 2026-03-09T14:29:23.557 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "fsid": "f59f9828-1bc3-11f1-bfd8-7b3d0c866040", 2026-03-09T14:29:23.557 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "health": { 2026-03-09T14:29:23.557 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "status": "HEALTH_OK", 2026-03-09T14:29:23.557 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "checks": {}, 2026-03-09T14:29:23.557 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "mutes": [] 2026-03-09T14:29:23.557 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: }, 2026-03-09T14:29:23.557 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "election_epoch": 5, 2026-03-09T14:29:23.557 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "quorum": [ 2026-03-09T14:29:23.557 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: 0 2026-03-09T14:29:23.557 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: ], 2026-03-09T14:29:23.557 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "quorum_names": [ 2026-03-09T14:29:23.557 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "a" 2026-03-09T14:29:23.557 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: ], 2026-03-09T14:29:23.557 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "quorum_age": 3, 2026-03-09T14:29:23.557 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "monmap": { 2026-03-09T14:29:23.557 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-09T14:29:23.557 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "min_mon_release_name": "quincy", 2026-03-09T14:29:23.557 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "num_mons": 1 2026-03-09T14:29:23.557 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: }, 2026-03-09T14:29:23.557 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "osdmap": { 2026-03-09T14:29:23.557 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-09T14:29:23.557 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "num_osds": 0, 2026-03-09T14:29:23.557 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "num_up_osds": 0, 2026-03-09T14:29:23.557 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "osd_up_since": 0, 2026-03-09T14:29:23.557 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "num_in_osds": 0, 2026-03-09T14:29:23.557 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "osd_in_since": 0, 2026-03-09T14:29:23.557 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "num_remapped_pgs": 0 2026-03-09T14:29:23.557 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: }, 2026-03-09T14:29:23.557 
INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "pgmap": { 2026-03-09T14:29:23.557 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "pgs_by_state": [], 2026-03-09T14:29:23.558 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "num_pgs": 0, 2026-03-09T14:29:23.558 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "num_pools": 0, 2026-03-09T14:29:23.558 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "num_objects": 0, 2026-03-09T14:29:23.558 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "data_bytes": 0, 2026-03-09T14:29:23.558 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "bytes_used": 0, 2026-03-09T14:29:23.558 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "bytes_avail": 0, 2026-03-09T14:29:23.558 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "bytes_total": 0 2026-03-09T14:29:23.558 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: }, 2026-03-09T14:29:23.558 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "fsmap": { 2026-03-09T14:29:23.558 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-09T14:29:23.558 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "by_rank": [], 2026-03-09T14:29:23.558 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "up:standby": 0 2026-03-09T14:29:23.558 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: }, 2026-03-09T14:29:23.558 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "mgrmap": { 2026-03-09T14:29:23.558 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "available": false, 2026-03-09T14:29:23.558 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "num_standbys": 0, 2026-03-09T14:29:23.558 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "modules": [ 2026-03-09T14:29:23.558 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "iostat", 2026-03-09T14:29:23.558 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "nfs", 2026-03-09T14:29:23.558 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "restful" 2026-03-09T14:29:23.558 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: ], 2026-03-09T14:29:23.558 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "services": {} 2026-03-09T14:29:23.558 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: }, 2026-03-09T14:29:23.558 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "servicemap": { 2026-03-09T14:29:23.558 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-09T14:29:23.558 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "modified": "2026-03-09T14:29:19.844812+0000", 2026-03-09T14:29:23.558 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "services": {} 2026-03-09T14:29:23.558 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: }, 2026-03-09T14:29:23.558 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "progress_events": {} 2026-03-09T14:29:23.558 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: } 2026-03-09T14:29:23.604 INFO:teuthology.orchestra.run.vm07.stderr:mgr not available, waiting (2/15)... 2026-03-09T14:29:23.663 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:23 vm07 bash[17480]: audit 2026-03-09T14:29:23.551800+0000 mon.a (mon.0) 8 : audit [DBG] from='client.? 
192.168.123.107:0/554337988' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T14:29:23.663 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:23 vm07 bash[17785]: debug 2026-03-09T14:29:23.520+0000 7f49daa19000 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T14:29:23.663 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:23 vm07 bash[17785]: debug 2026-03-09T14:29:23.604+0000 7f49daa19000 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T14:29:23.917 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:23 vm07 bash[17785]: debug 2026-03-09T14:29:23.656+0000 7f49daa19000 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T14:29:23.918 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:23 vm07 bash[17785]: debug 2026-03-09T14:29:23.728+0000 7f49daa19000 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T14:29:24.306 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:24 vm07 bash[17785]: debug 2026-03-09T14:29:24.008+0000 7f49daa19000 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T14:29:24.306 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:24 vm07 bash[17785]: debug 2026-03-09T14:29:24.176+0000 7f49daa19000 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T14:29:24.306 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:24 vm07 bash[17785]: debug 2026-03-09T14:29:24.232+0000 7f49daa19000 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T14:29:24.667 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:24 vm07 bash[17785]: debug 2026-03-09T14:29:24.300+0000 7f49daa19000 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T14:29:24.667 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:24 vm07 bash[17785]: debug 2026-03-09T14:29:24.444+0000 7f49daa19000 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T14:29:25.417 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:24 vm07 bash[17480]: cluster 2026-03-09T14:29:24.929831+0000 mon.a (mon.0) 9 : cluster [INF] Activating manager daemon y 2026-03-09T14:29:25.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:24 vm07 bash[17480]: cluster 2026-03-09T14:29:24.934100+0000 mon.a (mon.0) 10 : cluster [DBG] mgrmap e2: y(active, starting, since 0.00435277s) 2026-03-09T14:29:25.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:24 vm07 bash[17480]: audit 2026-03-09T14:29:24.936280+0000 mon.a (mon.0) 11 : audit [DBG] from='mgr.14102 192.168.123.107:0/3977817101' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T14:29:25.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:24 vm07 bash[17480]: audit 2026-03-09T14:29:24.936589+0000 mon.a (mon.0) 12 : audit [DBG] from='mgr.14102 192.168.123.107:0/3977817101' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T14:29:25.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:24 vm07 bash[17480]: audit 2026-03-09T14:29:24.936880+0000 mon.a (mon.0) 13 : audit [DBG] from='mgr.14102 192.168.123.107:0/3977817101' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T14:29:25.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:24 vm07 bash[17480]: audit 2026-03-09T14:29:24.937230+0000 mon.a (mon.0) 14 : audit [DBG] from='mgr.14102 192.168.123.107:0/3977817101' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:29:25.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:24 vm07 
bash[17480]: audit 2026-03-09T14:29:24.937717+0000 mon.a (mon.0) 15 : audit [DBG] from='mgr.14102 192.168.123.107:0/3977817101' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T14:29:25.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:24 vm07 bash[17480]: cluster 2026-03-09T14:29:24.944871+0000 mon.a (mon.0) 16 : cluster [INF] Manager daemon y is now available 2026-03-09T14:29:25.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:24 vm07 bash[17480]: audit 2026-03-09T14:29:24.954137+0000 mon.a (mon.0) 17 : audit [INF] from='mgr.14102 192.168.123.107:0/3977817101' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:29:25.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:24 vm07 bash[17480]: audit 2026-03-09T14:29:24.955019+0000 mon.a (mon.0) 18 : audit [INF] from='mgr.14102 192.168.123.107:0/3977817101' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:29:25.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:24 vm07 bash[17480]: audit 2026-03-09T14:29:24.961055+0000 mon.a (mon.0) 19 : audit [INF] from='mgr.14102 192.168.123.107:0/3977817101' entity='mgr.y' 2026-03-09T14:29:25.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:24 vm07 bash[17480]: audit 2026-03-09T14:29:24.963189+0000 mon.a (mon.0) 20 : audit [INF] from='mgr.14102 192.168.123.107:0/3977817101' entity='mgr.y' 2026-03-09T14:29:25.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:24 vm07 bash[17480]: audit 2026-03-09T14:29:24.965538+0000 mon.a (mon.0) 21 : audit [INF] from='mgr.14102 192.168.123.107:0/3977817101' entity='mgr.y' 2026-03-09T14:29:25.418 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:24 vm07 bash[17785]: debug 2026-03-09T14:29:24.924+0000 7f49daa19000 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T14:29:25.809 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: 2026-03-09T14:29:25.809 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: { 2026-03-09T14:29:25.809 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "fsid": "f59f9828-1bc3-11f1-bfd8-7b3d0c866040", 2026-03-09T14:29:25.809 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "health": { 2026-03-09T14:29:25.809 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "status": "HEALTH_OK", 2026-03-09T14:29:25.809 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "checks": {}, 2026-03-09T14:29:25.809 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "mutes": [] 2026-03-09T14:29:25.809 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: }, 2026-03-09T14:29:25.809 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "election_epoch": 5, 2026-03-09T14:29:25.809 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "quorum": [ 2026-03-09T14:29:25.809 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: 0 2026-03-09T14:29:25.809 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: ], 2026-03-09T14:29:25.809 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "quorum_names": [ 2026-03-09T14:29:25.809 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "a" 2026-03-09T14:29:25.809 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: ], 2026-03-09T14:29:25.809 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "quorum_age": 5, 2026-03-09T14:29:25.809 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "monmap": { 
2026-03-09T14:29:25.809 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-09T14:29:25.809 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "min_mon_release_name": "quincy", 2026-03-09T14:29:25.809 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "num_mons": 1 2026-03-09T14:29:25.809 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: }, 2026-03-09T14:29:25.809 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "osdmap": { 2026-03-09T14:29:25.809 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-09T14:29:25.809 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "num_osds": 0, 2026-03-09T14:29:25.809 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "num_up_osds": 0, 2026-03-09T14:29:25.809 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "osd_up_since": 0, 2026-03-09T14:29:25.809 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "num_in_osds": 0, 2026-03-09T14:29:25.809 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "osd_in_since": 0, 2026-03-09T14:29:25.809 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "num_remapped_pgs": 0 2026-03-09T14:29:25.809 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: }, 2026-03-09T14:29:25.809 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "pgmap": { 2026-03-09T14:29:25.809 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "pgs_by_state": [], 2026-03-09T14:29:25.809 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "num_pgs": 0, 2026-03-09T14:29:25.810 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "num_pools": 0, 2026-03-09T14:29:25.810 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "num_objects": 0, 2026-03-09T14:29:25.810 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "data_bytes": 0, 2026-03-09T14:29:25.810 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "bytes_used": 0, 2026-03-09T14:29:25.810 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "bytes_avail": 0, 2026-03-09T14:29:25.810 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "bytes_total": 0 2026-03-09T14:29:25.810 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: }, 2026-03-09T14:29:25.810 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "fsmap": { 2026-03-09T14:29:25.810 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-09T14:29:25.810 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "by_rank": [], 2026-03-09T14:29:25.810 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "up:standby": 0 2026-03-09T14:29:25.810 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: }, 2026-03-09T14:29:25.810 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "mgrmap": { 2026-03-09T14:29:25.810 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "available": false, 2026-03-09T14:29:25.810 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "num_standbys": 0, 2026-03-09T14:29:25.810 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "modules": [ 2026-03-09T14:29:25.810 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "iostat", 2026-03-09T14:29:25.810 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "nfs", 2026-03-09T14:29:25.810 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "restful" 2026-03-09T14:29:25.810 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: ], 2026-03-09T14:29:25.810 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "services": {} 2026-03-09T14:29:25.810 
INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: }, 2026-03-09T14:29:25.810 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "servicemap": { 2026-03-09T14:29:25.810 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-09T14:29:25.810 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "modified": "2026-03-09T14:29:19.844812+0000", 2026-03-09T14:29:25.810 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "services": {} 2026-03-09T14:29:25.810 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: }, 2026-03-09T14:29:25.810 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "progress_events": {} 2026-03-09T14:29:25.810 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: } 2026-03-09T14:29:25.841 INFO:teuthology.orchestra.run.vm07.stderr:mgr not available, waiting (3/15)... 2026-03-09T14:29:26.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:26 vm07 bash[17480]: audit 2026-03-09T14:29:25.803425+0000 mon.a (mon.0) 22 : audit [DBG] from='client.? 192.168.123.107:0/3835540673' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T14:29:26.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:26 vm07 bash[17480]: cluster 2026-03-09T14:29:25.938642+0000 mon.a (mon.0) 23 : cluster [DBG] mgrmap e3: y(active, since 1.0089s) 2026-03-09T14:29:28.095 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: 2026-03-09T14:29:28.095 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: { 2026-03-09T14:29:28.095 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "fsid": "f59f9828-1bc3-11f1-bfd8-7b3d0c866040", 2026-03-09T14:29:28.095 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "health": { 2026-03-09T14:29:28.095 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "status": "HEALTH_OK", 2026-03-09T14:29:28.095 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "checks": {}, 2026-03-09T14:29:28.095 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "mutes": [] 2026-03-09T14:29:28.095 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: }, 2026-03-09T14:29:28.095 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "election_epoch": 5, 2026-03-09T14:29:28.095 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "quorum": [ 2026-03-09T14:29:28.095 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: 0 2026-03-09T14:29:28.095 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: ], 2026-03-09T14:29:28.095 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "quorum_names": [ 2026-03-09T14:29:28.095 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "a" 2026-03-09T14:29:28.095 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: ], 2026-03-09T14:29:28.095 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "quorum_age": 7, 2026-03-09T14:29:28.095 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "monmap": { 2026-03-09T14:29:28.095 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-09T14:29:28.095 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "min_mon_release_name": "quincy", 2026-03-09T14:29:28.095 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "num_mons": 1 2026-03-09T14:29:28.095 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: }, 2026-03-09T14:29:28.095 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "osdmap": { 2026-03-09T14:29:28.095 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-09T14:29:28.095 
INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "num_osds": 0, 2026-03-09T14:29:28.095 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "num_up_osds": 0, 2026-03-09T14:29:28.095 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "osd_up_since": 0, 2026-03-09T14:29:28.095 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "num_in_osds": 0, 2026-03-09T14:29:28.095 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "osd_in_since": 0, 2026-03-09T14:29:28.095 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "num_remapped_pgs": 0 2026-03-09T14:29:28.095 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: }, 2026-03-09T14:29:28.095 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "pgmap": { 2026-03-09T14:29:28.096 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "pgs_by_state": [], 2026-03-09T14:29:28.096 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "num_pgs": 0, 2026-03-09T14:29:28.096 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "num_pools": 0, 2026-03-09T14:29:28.096 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "num_objects": 0, 2026-03-09T14:29:28.096 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "data_bytes": 0, 2026-03-09T14:29:28.096 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "bytes_used": 0, 2026-03-09T14:29:28.096 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "bytes_avail": 0, 2026-03-09T14:29:28.096 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "bytes_total": 0 2026-03-09T14:29:28.096 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: }, 2026-03-09T14:29:28.096 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "fsmap": { 2026-03-09T14:29:28.096 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-09T14:29:28.096 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "by_rank": [], 2026-03-09T14:29:28.096 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "up:standby": 0 2026-03-09T14:29:28.096 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: }, 2026-03-09T14:29:28.096 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "mgrmap": { 2026-03-09T14:29:28.096 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "available": true, 2026-03-09T14:29:28.096 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "num_standbys": 0, 2026-03-09T14:29:28.096 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "modules": [ 2026-03-09T14:29:28.096 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "iostat", 2026-03-09T14:29:28.096 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "nfs", 2026-03-09T14:29:28.096 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "restful" 2026-03-09T14:29:28.096 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: ], 2026-03-09T14:29:28.096 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "services": {} 2026-03-09T14:29:28.096 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: }, 2026-03-09T14:29:28.096 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "servicemap": { 2026-03-09T14:29:28.096 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-09T14:29:28.096 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "modified": "2026-03-09T14:29:19.844812+0000", 2026-03-09T14:29:28.096 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "services": {} 2026-03-09T14:29:28.096 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: }, 2026-03-09T14:29:28.096 
INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "progress_events": {} 2026-03-09T14:29:28.096 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: } 2026-03-09T14:29:28.129 INFO:teuthology.orchestra.run.vm07.stderr:mgr is available 2026-03-09T14:29:28.346 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: 2026-03-09T14:29:28.346 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: [global] 2026-03-09T14:29:28.346 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: fsid = f59f9828-1bc3-11f1-bfd8-7b3d0c866040 2026-03-09T14:29:28.346 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: mon_osd_allow_pg_remap = true 2026-03-09T14:29:28.346 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: mon_osd_allow_primary_affinity = true 2026-03-09T14:29:28.346 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: mon_warn_on_no_sortbitwise = false 2026-03-09T14:29:28.346 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: osd_crush_chooseleaf_type = 0 2026-03-09T14:29:28.346 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: 2026-03-09T14:29:28.346 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: [mgr] 2026-03-09T14:29:28.346 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: mgr/telemetry/nag = false 2026-03-09T14:29:28.346 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: 2026-03-09T14:29:28.346 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: [osd] 2026-03-09T14:29:28.347 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: osd_map_max_advance = 10 2026-03-09T14:29:28.347 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: osd_mclock_iops_capacity_threshold_hdd = 49000 2026-03-09T14:29:28.347 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: osd_sloppy_crc = true 2026-03-09T14:29:28.396 INFO:teuthology.orchestra.run.vm07.stderr:Enabling cephadm module... 2026-03-09T14:29:28.667 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:28 vm07 bash[17480]: cluster 2026-03-09T14:29:27.553465+0000 mon.a (mon.0) 24 : cluster [DBG] mgrmap e4: y(active, since 2s) 2026-03-09T14:29:28.667 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:28 vm07 bash[17480]: audit 2026-03-09T14:29:28.089527+0000 mon.a (mon.0) 25 : audit [DBG] from='client.? 192.168.123.107:0/2004749321' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T14:29:28.667 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:28 vm07 bash[17480]: audit 2026-03-09T14:29:28.336310+0000 mon.a (mon.0) 26 : audit [INF] from='client.? 192.168.123.107:0/336531685' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-09T14:29:28.667 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:28 vm07 bash[17480]: audit 2026-03-09T14:29:28.339190+0000 mon.a (mon.0) 27 : audit [INF] from='client.? 
192.168.123.107:0/336531685' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished 2026-03-09T14:29:29.874 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: { 2026-03-09T14:29:29.874 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "epoch": 5, 2026-03-09T14:29:29.874 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "available": true, 2026-03-09T14:29:29.874 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "active_name": "y", 2026-03-09T14:29:29.874 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "num_standby": 0 2026-03-09T14:29:29.874 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: } 2026-03-09T14:29:29.884 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:29 vm07 bash[17480]: audit 2026-03-09T14:29:28.620679+0000 mon.a (mon.0) 28 : audit [INF] from='client.? 192.168.123.107:0/324421273' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-09T14:29:29.884 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:29 vm07 bash[17785]: ignoring --setuser ceph since I am not root 2026-03-09T14:29:29.884 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:29 vm07 bash[17785]: ignoring --setgroup ceph since I am not root 2026-03-09T14:29:29.884 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:29 vm07 bash[17785]: debug 2026-03-09T14:29:29.688+0000 7f798a385000 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T14:29:29.884 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:29 vm07 bash[17785]: debug 2026-03-09T14:29:29.732+0000 7f798a385000 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T14:29:29.920 INFO:teuthology.orchestra.run.vm07.stderr:Waiting for the mgr to restart... 2026-03-09T14:29:29.920 INFO:teuthology.orchestra.run.vm07.stderr:Waiting for mgr epoch 5... 2026-03-09T14:29:30.167 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:30 vm07 bash[17785]: debug 2026-03-09T14:29:30.052+0000 7f798a385000 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T14:29:30.759 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:30 vm07 bash[17480]: audit 2026-03-09T14:29:29.559776+0000 mon.a (mon.0) 29 : audit [INF] from='client.? 192.168.123.107:0/324421273' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-09T14:29:30.759 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:30 vm07 bash[17480]: cluster 2026-03-09T14:29:29.559802+0000 mon.a (mon.0) 30 : cluster [DBG] mgrmap e5: y(active, since 4s) 2026-03-09T14:29:30.759 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:30 vm07 bash[17480]: audit 2026-03-09T14:29:29.868080+0000 mon.a (mon.0) 31 : audit [DBG] from='client.? 
192.168.123.107:0/3880359995' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T14:29:30.759 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:30 vm07 bash[17785]: debug 2026-03-09T14:29:30.492+0000 7f798a385000 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T14:29:30.759 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:30 vm07 bash[17785]: debug 2026-03-09T14:29:30.576+0000 7f798a385000 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T14:29:31.025 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:30 vm07 bash[17785]: debug 2026-03-09T14:29:30.752+0000 7f798a385000 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T14:29:31.025 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:30 vm07 bash[17785]: debug 2026-03-09T14:29:30.844+0000 7f798a385000 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T14:29:31.025 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:30 vm07 bash[17785]: debug 2026-03-09T14:29:30.892+0000 7f798a385000 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T14:29:31.417 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:31 vm07 bash[17785]: debug 2026-03-09T14:29:31.016+0000 7f798a385000 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T14:29:31.417 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:31 vm07 bash[17785]: debug 2026-03-09T14:29:31.072+0000 7f798a385000 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T14:29:31.418 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:31 vm07 bash[17785]: debug 2026-03-09T14:29:31.136+0000 7f798a385000 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T14:29:31.917 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:31 vm07 bash[17785]: debug 2026-03-09T14:29:31.580+0000 7f798a385000 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T14:29:31.918 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:31 vm07 bash[17785]: debug 2026-03-09T14:29:31.628+0000 7f798a385000 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T14:29:31.918 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:31 vm07 bash[17785]: debug 2026-03-09T14:29:31.676+0000 7f798a385000 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T14:29:32.396 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:31 vm07 bash[17785]: debug 2026-03-09T14:29:31.944+0000 7f798a385000 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T14:29:32.396 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:32 vm07 bash[17785]: debug 2026-03-09T14:29:31.996+0000 7f798a385000 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T14:29:32.396 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:32 vm07 bash[17785]: debug 2026-03-09T14:29:32.044+0000 7f798a385000 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T14:29:32.396 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:32 vm07 bash[17785]: debug 2026-03-09T14:29:32.116+0000 7f798a385000 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T14:29:32.647 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:32 vm07 bash[17785]: debug 2026-03-09T14:29:32.388+0000 7f798a385000 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T14:29:32.647 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:32 vm07 bash[17785]: debug 2026-03-09T14:29:32.540+0000 7f798a385000 -1 mgr[py] Module prometheus has 
missing NOTIFY_TYPES member 2026-03-09T14:29:32.647 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:32 vm07 bash[17785]: debug 2026-03-09T14:29:32.588+0000 7f798a385000 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T14:29:32.917 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:32 vm07 bash[17785]: debug 2026-03-09T14:29:32.640+0000 7f798a385000 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T14:29:32.918 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:32 vm07 bash[17785]: debug 2026-03-09T14:29:32.760+0000 7f798a385000 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T14:29:33.667 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:33 vm07 bash[17480]: cluster 2026-03-09T14:29:33.188145+0000 mon.a (mon.0) 32 : cluster [INF] Active manager daemon y restarted 2026-03-09T14:29:33.667 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:33 vm07 bash[17480]: cluster 2026-03-09T14:29:33.188970+0000 mon.a (mon.0) 33 : cluster [INF] Activating manager daemon y 2026-03-09T14:29:33.667 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:33 vm07 bash[17480]: cluster 2026-03-09T14:29:33.191238+0000 mon.a (mon.0) 34 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-09T14:29:33.667 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:33 vm07 bash[17785]: debug 2026-03-09T14:29:33.184+0000 7f798a385000 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T14:29:34.167 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:33 vm07 bash[17785]: [09/Mar/2026:14:29:33] ENGINE Bus STARTING 2026-03-09T14:29:34.167 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:33 vm07 bash[17785]: [09/Mar/2026:14:29:33] ENGINE Serving on https://192.168.123.107:7150 2026-03-09T14:29:34.167 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:33 vm07 bash[17785]: [09/Mar/2026:14:29:33] ENGINE Bus STARTED 2026-03-09T14:29:34.260 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: { 2026-03-09T14:29:34.260 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "mgrmap_epoch": 6, 2026-03-09T14:29:34.260 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "initialized": true 2026-03-09T14:29:34.260 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: } 2026-03-09T14:29:34.293 INFO:teuthology.orchestra.run.vm07.stderr:mgr epoch 5 is available 2026-03-09T14:29:34.293 INFO:teuthology.orchestra.run.vm07.stderr:Setting orchestrator backend to cephadm... 
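After `ceph mgr module enable cephadm`, the bootstrap waits for the active mgr to restart into a newer map epoch ("Waiting for mgr epoch 5...", "mgr epoch 5 is available"). The `ceph mgr stat` output shown above exposes the fields such a wait can use (`epoch`, `available`, `active_name`); a hedged sketch of that kind of epoch poll, not the bootstrap's actual implementation, follows.

    import json
    import subprocess
    import time

    def wait_for_mgr_epoch(min_epoch, timeout=60, delay=2):
        # Wait until `ceph mgr stat` reports an available mgr at or beyond
        # min_epoch. Sketch based on the "Waiting for mgr epoch N..." lines
        # above; the real bootstrap logic may differ.
        deadline = time.time() + timeout
        while time.time() < deadline:
            out = subprocess.run(
                ["ceph", "mgr", "stat", "--format", "json"],
                capture_output=True, text=True, check=True,
            ).stdout
            stat = json.loads(out)
            if stat.get("available") and stat.get("epoch", 0) >= min_epoch:
                return stat
            time.sleep(delay)
        raise TimeoutError(f"mgr did not reach epoch {min_epoch}")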
2026-03-09T14:29:34.538 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:34 vm07 bash[17480]: cluster 2026-03-09T14:29:33.241765+0000 mon.a (mon.0) 35 : cluster [DBG] mgrmap e6: y(active, starting, since 0.0528771s) 2026-03-09T14:29:34.538 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:34 vm07 bash[17480]: audit 2026-03-09T14:29:33.244907+0000 mon.a (mon.0) 36 : audit [DBG] from='mgr.14120 192.168.123.107:0/392955663' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:29:34.538 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:34 vm07 bash[17480]: audit 2026-03-09T14:29:33.245009+0000 mon.a (mon.0) 37 : audit [DBG] from='mgr.14120 192.168.123.107:0/392955663' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T14:29:34.539 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:34 vm07 bash[17480]: audit 2026-03-09T14:29:33.245449+0000 mon.a (mon.0) 38 : audit [DBG] from='mgr.14120 192.168.123.107:0/392955663' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T14:29:34.539 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:34 vm07 bash[17480]: audit 2026-03-09T14:29:33.245610+0000 mon.a (mon.0) 39 : audit [DBG] from='mgr.14120 192.168.123.107:0/392955663' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T14:29:34.539 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:34 vm07 bash[17480]: audit 2026-03-09T14:29:33.245787+0000 mon.a (mon.0) 40 : audit [DBG] from='mgr.14120 192.168.123.107:0/392955663' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T14:29:34.539 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:34 vm07 bash[17480]: cluster 2026-03-09T14:29:33.252537+0000 mon.a (mon.0) 41 : cluster [INF] Manager daemon y is now available 2026-03-09T14:29:34.539 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:34 vm07 bash[17480]: audit 2026-03-09T14:29:33.259897+0000 mon.a (mon.0) 42 : audit [INF] from='mgr.14120 192.168.123.107:0/392955663' entity='mgr.y' 2026-03-09T14:29:34.539 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:34 vm07 bash[17480]: audit 2026-03-09T14:29:33.262263+0000 mon.a (mon.0) 43 : audit [INF] from='mgr.14120 192.168.123.107:0/392955663' entity='mgr.y' 2026-03-09T14:29:34.539 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:34 vm07 bash[17480]: audit 2026-03-09T14:29:33.270245+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14120 192.168.123.107:0/392955663' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:29:34.539 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:34 vm07 bash[17480]: audit 2026-03-09T14:29:33.270963+0000 mon.a (mon.0) 45 : audit [DBG] from='mgr.14120 192.168.123.107:0/392955663' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:29:34.539 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:34 vm07 bash[17480]: audit 2026-03-09T14:29:33.271429+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.14120 192.168.123.107:0/392955663' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:29:34.539 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:34 vm07 bash[17480]: audit 2026-03-09T14:29:33.272165+0000 mon.a (mon.0) 47 : audit [DBG] from='mgr.14120 192.168.123.107:0/392955663' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:29:34.539 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:34 vm07 bash[17480]: audit 
2026-03-09T14:29:33.280307+0000 mon.a (mon.0) 48 : audit [INF] from='mgr.14120 192.168.123.107:0/392955663' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:29:34.539 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:34 vm07 bash[17480]: audit 2026-03-09T14:29:33.799530+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.14120 192.168.123.107:0/392955663' entity='mgr.y' 2026-03-09T14:29:34.539 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:34 vm07 bash[17480]: audit 2026-03-09T14:29:33.837456+0000 mon.a (mon.0) 50 : audit [DBG] from='mgr.14120 192.168.123.107:0/392955663' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:29:34.777 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: value unchanged 2026-03-09T14:29:34.820 INFO:teuthology.orchestra.run.vm07.stderr:Generating ssh key... 2026-03-09T14:29:35.303 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDGYnnFFMouXWBxQFy+cGYrRVF9vWt4s0N0URN8b23t4EHODSeM6sS/7tjJswwuNm2sov25xo/smfZ39la3ki8wCPCmB5hPvLs+ZPRdV4++asTFNkIldWob3paueHpVdY5w1EGwxxM7MzWMlar1eAVHRtxC6kxpja6bnmyqWVJzx8KvOv2yzAbdn8StSa3beeEHReRFq69zw2qtxlU6a8PL0pyy5h/5GkTrk5XUW9FSsGLdYBRUzA+DafY8DOV2OCvghnvFypmclsWY1hJFNmjfdlhsTHzrYvKAto4uT3LnB4d8gRkFCPqNAKfAH41ccnFcvU4NrqphXvgdYQMXP9Pvf73hJFblOgkx415cC9jo/Kq4Zrq/b6B5GjRk32XeMrXVwujnZ0D8prgn2OAxFYAU9VdOsqDyMAhA5CM/JdHJw/hoFZP+oyWmXvOkbLr5U1dvibvhVM72HqDk4CDpnkWUpIi+bZvKmLPYIIHzPC3udFf9yyP/tQ6w0PpiTz64olU= ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040 2026-03-09T14:29:35.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:35 vm07 bash[17480]: cephadm 2026-03-09T14:29:33.686818+0000 mgr.y (mgr.14120) 1 : cephadm [INF] [09/Mar/2026:14:29:33] ENGINE Bus STARTING 2026-03-09T14:29:35.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:35 vm07 bash[17480]: cephadm 2026-03-09T14:29:33.795788+0000 mgr.y (mgr.14120) 2 : cephadm [INF] [09/Mar/2026:14:29:33] ENGINE Serving on https://192.168.123.107:7150 2026-03-09T14:29:35.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:35 vm07 bash[17480]: cephadm 2026-03-09T14:29:33.795893+0000 mgr.y (mgr.14120) 3 : cephadm [INF] [09/Mar/2026:14:29:33] ENGINE Bus STARTED 2026-03-09T14:29:35.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:35 vm07 bash[17480]: audit 2026-03-09T14:29:34.251903+0000 mgr.y (mgr.14120) 4 : audit [DBG] from='client.14124 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T14:29:35.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:35 vm07 bash[17480]: audit 2026-03-09T14:29:34.255438+0000 mgr.y (mgr.14120) 5 : audit [DBG] from='client.14124 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T14:29:35.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:35 vm07 bash[17480]: cluster 2026-03-09T14:29:34.255648+0000 mon.a (mon.0) 51 : cluster [DBG] mgrmap e7: y(active, since 1.06675s) 2026-03-09T14:29:35.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:35 vm07 bash[17480]: audit 2026-03-09T14:29:34.511990+0000 mgr.y (mgr.14120) 6 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:29:35.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:35 vm07 bash[17480]: audit 2026-03-09T14:29:34.517499+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14120 192.168.123.107:0/392955663' entity='mgr.y' 
2026-03-09T14:29:35.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:35 vm07 bash[17480]: audit 2026-03-09T14:29:34.556926+0000 mon.a (mon.0) 53 : audit [DBG] from='mgr.14120 192.168.123.107:0/392955663' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:29:35.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:35 vm07 bash[17480]: audit 2026-03-09T14:29:35.059361+0000 mon.a (mon.0) 54 : audit [INF] from='mgr.14120 192.168.123.107:0/392955663' entity='mgr.y' 2026-03-09T14:29:35.313 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:35 vm07 bash[17480]: audit 2026-03-09T14:29:35.062024+0000 mon.a (mon.0) 55 : audit [INF] from='mgr.14120 192.168.123.107:0/392955663' entity='mgr.y' 2026-03-09T14:29:35.313 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:35 vm07 bash[17785]: Generating public/private rsa key pair. 2026-03-09T14:29:35.313 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:35 vm07 bash[17785]: Your identification has been saved in /tmp/tmp8un41x1a/key. 2026-03-09T14:29:35.313 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:35 vm07 bash[17785]: Your public key has been saved in /tmp/tmp8un41x1a/key.pub. 2026-03-09T14:29:35.313 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:35 vm07 bash[17785]: The key fingerprint is: 2026-03-09T14:29:35.313 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:35 vm07 bash[17785]: SHA256:kTKvGZhRnjh6O5zX2znqOTLc2U48b88clDwsADT6S70 ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040 2026-03-09T14:29:35.313 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:35 vm07 bash[17785]: The key's randomart image is: 2026-03-09T14:29:35.313 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:35 vm07 bash[17785]: +---[RSA 3072]----+ 2026-03-09T14:29:35.313 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:35 vm07 bash[17785]: | ..+ | 2026-03-09T14:29:35.313 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:35 vm07 bash[17785]: | + o + | 2026-03-09T14:29:35.313 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:35 vm07 bash[17785]: | + * o . | 2026-03-09T14:29:35.313 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:35 vm07 bash[17785]: | . = = o . o . | 2026-03-09T14:29:35.313 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:35 vm07 bash[17785]: | . + . S . . * | 2026-03-09T14:29:35.313 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:35 vm07 bash[17785]: | o o * o . o . | 2026-03-09T14:29:35.313 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:35 vm07 bash[17785]: | =.+.ooE . | 2026-03-09T14:29:35.313 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:35 vm07 bash[17785]: | o+ +=o+.o . | 2026-03-09T14:29:35.313 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:35 vm07 bash[17785]: | +=++o..+ | 2026-03-09T14:29:35.313 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:35 vm07 bash[17785]: +----[SHA256]-----+ 2026-03-09T14:29:35.340 INFO:teuthology.orchestra.run.vm07.stderr:Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub 2026-03-09T14:29:35.340 INFO:teuthology.orchestra.run.vm07.stderr:Adding key to root@localhost authorized_keys... 2026-03-09T14:29:35.340 INFO:teuthology.orchestra.run.vm07.stderr:Adding host vm07... 2026-03-09T14:29:36.163 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: Added host 'vm07' with addr '192.168.123.107' 2026-03-09T14:29:36.219 INFO:teuthology.orchestra.run.vm07.stderr:Deploying unmanaged mon service... 2026-03-09T14:29:36.480 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: Scheduled mon update... 
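The sequence above — `cephadm generate-key`, `cephadm get-pub-key`, writing ceph.pub, installing the key for root@localhost, then `orch host add vm07` — can be reproduced from the CLI. A small sketch, using only the commands and values visible in the audit lines; the helper itself is illustrative, since cephadm bootstrap performs these steps on its own.

    import subprocess

    def add_host(hostname, addr, authorized_keys="/root/.ssh/authorized_keys"):
        # Fetch the cluster SSH public key and register a host with the
        # orchestrator, mirroring "Wrote public SSH key...", "Adding key to
        # root@localhost authorized_keys..." and "Adding host vm07..." above.
        pub = subprocess.run(
            ["ceph", "cephadm", "get-pub-key"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        with open(authorized_keys, "a") as fh:
            fh.write(pub + "\n")  # lets the cephadm mgr module SSH in as root
        subprocess.run(["ceph", "orch", "host", "add", hostname, addr], check=True)

    # e.g. add_host("vm07", "192.168.123.107"), the values used in this run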
2026-03-09T14:29:36.515 INFO:teuthology.orchestra.run.vm07.stderr:Deploying unmanaged mgr service... 2026-03-09T14:29:36.736 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: Scheduled mgr update... 2026-03-09T14:29:36.747 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:36 vm07 bash[17480]: audit 2026-03-09T14:29:34.773060+0000 mgr.y (mgr.14120) 7 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:29:36.747 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:36 vm07 bash[17480]: audit 2026-03-09T14:29:35.020554+0000 mgr.y (mgr.14120) 8 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:29:36.747 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:36 vm07 bash[17480]: cephadm 2026-03-09T14:29:35.020792+0000 mgr.y (mgr.14120) 9 : cephadm [INF] Generating ssh key... 2026-03-09T14:29:36.747 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:36 vm07 bash[17480]: audit 2026-03-09T14:29:35.299029+0000 mgr.y (mgr.14120) 10 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:29:36.747 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:36 vm07 bash[17480]: audit 2026-03-09T14:29:35.558875+0000 mgr.y (mgr.14120) 11 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm07", "addr": "192.168.123.107", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:29:36.747 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:36 vm07 bash[17480]: cluster 2026-03-09T14:29:36.066423+0000 mon.a (mon.0) 56 : cluster [DBG] mgrmap e8: y(active, since 2s) 2026-03-09T14:29:36.747 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:36 vm07 bash[17480]: audit 2026-03-09T14:29:36.155831+0000 mon.a (mon.0) 57 : audit [INF] from='mgr.14120 192.168.123.107:0/392955663' entity='mgr.y' 2026-03-09T14:29:36.747 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:36 vm07 bash[17480]: audit 2026-03-09T14:29:36.179031+0000 mon.a (mon.0) 58 : audit [DBG] from='mgr.14120 192.168.123.107:0/392955663' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:29:36.747 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:36 vm07 bash[17480]: audit 2026-03-09T14:29:36.475050+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14120 192.168.123.107:0/392955663' entity='mgr.y' 2026-03-09T14:29:37.288 INFO:teuthology.orchestra.run.vm07.stderr:Enabling the dashboard module... 
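The two audit entries above show `orch apply` issued with `"unmanaged": true` for both mon and mgr: the spec is saved (cephadm logs "Saving service mon spec with placement count:5") but the orchestrator will not add or remove daemons for it. On the CLI this corresponds, as far as I can tell, to the `--unmanaged` flag of `ceph orch apply`; the loop below is only an illustrative equivalent, since the bootstrap issues these commands itself.

    import subprocess

    # Record unmanaged mon and mgr service specs, as the bootstrap does above.
    for service in ("mon", "mgr"):
        subprocess.run(["ceph", "orch", "apply", service, "--unmanaged"], check=True)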
2026-03-09T14:29:37.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:37 vm07 bash[17480]: cephadm 2026-03-09T14:29:35.906689+0000 mgr.y (mgr.14120) 12 : cephadm [INF] Deploying cephadm binary to vm07 2026-03-09T14:29:37.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:37 vm07 bash[17480]: cephadm 2026-03-09T14:29:36.156263+0000 mgr.y (mgr.14120) 13 : cephadm [INF] Added host vm07 2026-03-09T14:29:37.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:37 vm07 bash[17480]: audit 2026-03-09T14:29:36.471321+0000 mgr.y (mgr.14120) 14 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:29:37.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:37 vm07 bash[17480]: cephadm 2026-03-09T14:29:36.472146+0000 mgr.y (mgr.14120) 15 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-09T14:29:37.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:37 vm07 bash[17480]: audit 2026-03-09T14:29:36.731405+0000 mon.a (mon.0) 60 : audit [INF] from='mgr.14120 192.168.123.107:0/392955663' entity='mgr.y' 2026-03-09T14:29:37.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:37 vm07 bash[17480]: audit 2026-03-09T14:29:36.988481+0000 mon.a (mon.0) 61 : audit [INF] from='client.? 192.168.123.107:0/987908406' entity='client.admin' 2026-03-09T14:29:37.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:37 vm07 bash[17480]: audit 2026-03-09T14:29:37.246420+0000 mon.a (mon.0) 62 : audit [INF] from='client.? 192.168.123.107:0/2800315633' entity='client.admin' 2026-03-09T14:29:38.891 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: { 2026-03-09T14:29:38.892 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "epoch": 9, 2026-03-09T14:29:38.892 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "available": true, 2026-03-09T14:29:38.892 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "active_name": "y", 2026-03-09T14:29:38.892 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "num_standby": 0 2026-03-09T14:29:38.892 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: } 2026-03-09T14:29:38.901 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:38 vm07 bash[17480]: audit 2026-03-09T14:29:36.727894+0000 mgr.y (mgr.14120) 16 : audit [DBG] from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:29:38.901 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:38 vm07 bash[17480]: cephadm 2026-03-09T14:29:36.728550+0000 mgr.y (mgr.14120) 17 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-09T14:29:38.901 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:38 vm07 bash[17480]: audit 2026-03-09T14:29:37.570848+0000 mon.a (mon.0) 63 : audit [INF] from='client.? 
192.168.123.107:0/344500422' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-09T14:29:38.901 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:38 vm07 bash[17480]: audit 2026-03-09T14:29:37.580651+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.14120 192.168.123.107:0/392955663' entity='mgr.y' 2026-03-09T14:29:38.901 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:38 vm07 bash[17480]: audit 2026-03-09T14:29:37.693803+0000 mon.a (mon.0) 65 : audit [INF] from='mgr.14120 192.168.123.107:0/392955663' entity='mgr.y' 2026-03-09T14:29:38.901 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:38 vm07 bash[17785]: ignoring --setuser ceph since I am not root 2026-03-09T14:29:38.901 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:38 vm07 bash[17785]: ignoring --setgroup ceph since I am not root 2026-03-09T14:29:38.901 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:38 vm07 bash[17785]: debug 2026-03-09T14:29:38.696+0000 7f57cf7ab000 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T14:29:38.901 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:38 vm07 bash[17785]: debug 2026-03-09T14:29:38.740+0000 7f57cf7ab000 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T14:29:38.938 INFO:teuthology.orchestra.run.vm07.stderr:Waiting for the mgr to restart... 2026-03-09T14:29:38.938 INFO:teuthology.orchestra.run.vm07.stderr:Waiting for mgr epoch 9... 2026-03-09T14:29:39.167 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:39 vm07 bash[17785]: debug 2026-03-09T14:29:39.044+0000 7f57cf7ab000 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T14:29:39.564 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:39 vm07 bash[17785]: debug 2026-03-09T14:29:39.476+0000 7f57cf7ab000 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T14:29:39.821 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:39 vm07 bash[17480]: audit 2026-03-09T14:29:38.560831+0000 mon.a (mon.0) 66 : audit [INF] from='client.? 192.168.123.107:0/344500422' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-09T14:29:39.821 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:39 vm07 bash[17480]: cluster 2026-03-09T14:29:38.560915+0000 mon.a (mon.0) 67 : cluster [DBG] mgrmap e9: y(active, since 5s) 2026-03-09T14:29:39.821 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:39 vm07 bash[17480]: audit 2026-03-09T14:29:38.886933+0000 mon.a (mon.0) 68 : audit [DBG] from='client.? 
192.168.123.107:0/1638389091' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T14:29:39.821 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:39 vm07 bash[17785]: debug 2026-03-09T14:29:39.556+0000 7f57cf7ab000 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T14:29:39.821 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:39 vm07 bash[17785]: debug 2026-03-09T14:29:39.724+0000 7f57cf7ab000 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T14:29:40.099 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:39 vm07 bash[17785]: debug 2026-03-09T14:29:39.812+0000 7f57cf7ab000 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T14:29:40.099 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:39 vm07 bash[17785]: debug 2026-03-09T14:29:39.860+0000 7f57cf7ab000 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T14:29:40.099 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:39 vm07 bash[17785]: debug 2026-03-09T14:29:39.980+0000 7f57cf7ab000 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T14:29:40.099 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:40 vm07 bash[17785]: debug 2026-03-09T14:29:40.032+0000 7f57cf7ab000 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T14:29:40.417 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:40 vm07 bash[17785]: debug 2026-03-09T14:29:40.092+0000 7f57cf7ab000 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T14:29:40.917 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:40 vm07 bash[17785]: debug 2026-03-09T14:29:40.544+0000 7f57cf7ab000 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T14:29:40.917 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:40 vm07 bash[17785]: debug 2026-03-09T14:29:40.592+0000 7f57cf7ab000 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T14:29:40.917 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:40 vm07 bash[17785]: debug 2026-03-09T14:29:40.640+0000 7f57cf7ab000 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T14:29:41.390 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:40 vm07 bash[17785]: debug 2026-03-09T14:29:40.916+0000 7f57cf7ab000 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T14:29:41.390 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:40 vm07 bash[17785]: debug 2026-03-09T14:29:40.968+0000 7f57cf7ab000 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T14:29:41.390 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:41 vm07 bash[17785]: debug 2026-03-09T14:29:41.016+0000 7f57cf7ab000 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T14:29:41.390 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:41 vm07 bash[17785]: debug 2026-03-09T14:29:41.088+0000 7f57cf7ab000 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T14:29:41.390 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:41 vm07 bash[17785]: debug 2026-03-09T14:29:41.380+0000 7f57cf7ab000 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T14:29:41.662 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:41 vm07 bash[17785]: debug 2026-03-09T14:29:41.548+0000 7f57cf7ab000 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T14:29:41.662 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:41 vm07 bash[17785]: debug 2026-03-09T14:29:41.596+0000 7f57cf7ab000 -1 mgr[py] Module iostat has missing 
NOTIFY_TYPES member 2026-03-09T14:29:41.917 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:41 vm07 bash[17785]: debug 2026-03-09T14:29:41.652+0000 7f57cf7ab000 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T14:29:41.918 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:41 vm07 bash[17785]: debug 2026-03-09T14:29:41.784+0000 7f57cf7ab000 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T14:29:42.667 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:42 vm07 bash[17480]: cluster 2026-03-09T14:29:42.227286+0000 mon.a (mon.0) 69 : cluster [INF] Active manager daemon y restarted 2026-03-09T14:29:42.667 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:42 vm07 bash[17480]: cluster 2026-03-09T14:29:42.228202+0000 mon.a (mon.0) 70 : cluster [INF] Activating manager daemon y 2026-03-09T14:29:42.667 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:42 vm07 bash[17480]: cluster 2026-03-09T14:29:42.230392+0000 mon.a (mon.0) 71 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-09T14:29:42.667 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:42 vm07 bash[17785]: debug 2026-03-09T14:29:42.224+0000 7f57cf7ab000 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T14:29:43.167 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:42 vm07 bash[17785]: [09/Mar/2026:14:29:42] ENGINE Bus STARTING 2026-03-09T14:29:43.167 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:42 vm07 bash[17785]: [09/Mar/2026:14:29:42] ENGINE Serving on https://192.168.123.107:7150 2026-03-09T14:29:43.167 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:42 vm07 bash[17785]: [09/Mar/2026:14:29:42] ENGINE Bus STARTED 2026-03-09T14:29:43.302 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: { 2026-03-09T14:29:43.302 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "mgrmap_epoch": 11, 2026-03-09T14:29:43.302 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: "initialized": true 2026-03-09T14:29:43.302 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: } 2026-03-09T14:29:43.337 INFO:teuthology.orchestra.run.vm07.stderr:mgr epoch 9 is available 2026-03-09T14:29:43.337 INFO:teuthology.orchestra.run.vm07.stderr:Generating a dashboard self-signed certificate... 
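The "Waiting for the mgr to restart... / Waiting for mgr epoch 9..." messages above correspond to the bootstrap polling the mgr map until the restarted daemon reports at least the expected epoch and is available again. A minimal sketch of that wait, assuming jq is present; the target epoch of 9 comes from this run and the sleep interval is illustrative:

    # Sketch only: poll the mgr map until the expected epoch is reached and the mgr is available.
    until ceph mgr stat -f json | jq -e --argjson want 9 '.epoch >= $want and .available' >/dev/null; do
        sleep 2    # interval is illustrative
    done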
2026-03-09T14:29:43.585 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: Self-signed certificate created 2026-03-09T14:29:43.593 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:43 vm07 bash[17480]: cluster 2026-03-09T14:29:42.282116+0000 mon.a (mon.0) 72 : cluster [DBG] mgrmap e10: y(active, starting, since 0.0540043s) 2026-03-09T14:29:43.593 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:43 vm07 bash[17480]: audit 2026-03-09T14:29:42.287001+0000 mon.a (mon.0) 73 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:29:43.593 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:43 vm07 bash[17480]: audit 2026-03-09T14:29:42.287621+0000 mon.a (mon.0) 74 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T14:29:43.593 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:43 vm07 bash[17480]: audit 2026-03-09T14:29:42.288336+0000 mon.a (mon.0) 75 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T14:29:43.593 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:43 vm07 bash[17480]: audit 2026-03-09T14:29:42.288428+0000 mon.a (mon.0) 76 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T14:29:43.593 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:43 vm07 bash[17480]: audit 2026-03-09T14:29:42.288466+0000 mon.a (mon.0) 77 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T14:29:43.593 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:43 vm07 bash[17480]: cluster 2026-03-09T14:29:42.292326+0000 mon.a (mon.0) 78 : cluster [INF] Manager daemon y is now available 2026-03-09T14:29:43.593 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:43 vm07 bash[17480]: audit 2026-03-09T14:29:42.307204+0000 mon.a (mon.0) 79 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:29:43.593 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:43 vm07 bash[17480]: audit 2026-03-09T14:29:42.308500+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:29:43.593 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:43 vm07 bash[17480]: audit 2026-03-09T14:29:42.312963+0000 mon.a (mon.0) 81 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:29:43.593 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:43 vm07 bash[17480]: audit 2026-03-09T14:29:42.325774+0000 mon.a (mon.0) 82 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:29:43.593 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:43 vm07 bash[17480]: audit 2026-03-09T14:29:42.883402+0000 mon.a (mon.0) 83 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:29:43.618 INFO:teuthology.orchestra.run.vm07.stderr:Creating initial admin user... 
2026-03-09T14:29:43.994 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: {"username": "admin", "password": "$2b$12$dr3GSFHCItY8o91KJ6uppeBhoNgJyL6HO6zhLLG6YP3TYQn0SHc0K", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773066583, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true} 2026-03-09T14:29:44.028 INFO:teuthology.orchestra.run.vm07.stderr:Fetching dashboard port number... 2026-03-09T14:29:44.236 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: 8443 2026-03-09T14:29:44.268 INFO:teuthology.orchestra.run.vm07.stderr:firewalld does not appear to be present 2026-03-09T14:29:44.268 INFO:teuthology.orchestra.run.vm07.stderr:Not possible to open ports <[8443]>. firewalld.service is not available 2026-03-09T14:29:44.269 INFO:teuthology.orchestra.run.vm07.stderr:Ceph Dashboard is now available at: 2026-03-09T14:29:44.269 INFO:teuthology.orchestra.run.vm07.stderr: 2026-03-09T14:29:44.269 INFO:teuthology.orchestra.run.vm07.stderr: URL: https://vm07.local:8443/ 2026-03-09T14:29:44.269 INFO:teuthology.orchestra.run.vm07.stderr: User: admin 2026-03-09T14:29:44.269 INFO:teuthology.orchestra.run.vm07.stderr: Password: r2zzat0bqx 2026-03-09T14:29:44.269 INFO:teuthology.orchestra.run.vm07.stderr: 2026-03-09T14:29:44.269 INFO:teuthology.orchestra.run.vm07.stderr:Enabling autotune for osd_memory_target 2026-03-09T14:29:44.298 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:44 vm07 bash[17480]: cephadm 2026-03-09T14:29:42.769677+0000 mgr.y (mgr.14152) 1 : cephadm [INF] [09/Mar/2026:14:29:42] ENGINE Bus STARTING 2026-03-09T14:29:44.298 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:44 vm07 bash[17480]: cephadm 2026-03-09T14:29:42.879488+0000 mgr.y (mgr.14152) 2 : cephadm [INF] [09/Mar/2026:14:29:42] ENGINE Serving on https://192.168.123.107:7150 2026-03-09T14:29:44.298 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:44 vm07 bash[17480]: cephadm 2026-03-09T14:29:42.879691+0000 mgr.y (mgr.14152) 3 : cephadm [INF] [09/Mar/2026:14:29:42] ENGINE Bus STARTED 2026-03-09T14:29:44.298 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:44 vm07 bash[17480]: cluster 2026-03-09T14:29:43.288987+0000 mon.a (mon.0) 84 : cluster [DBG] mgrmap e11: y(active, since 1.06088s) 2026-03-09T14:29:44.298 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:44 vm07 bash[17480]: audit 2026-03-09T14:29:43.293178+0000 mgr.y (mgr.14152) 4 : audit [DBG] from='client.14156 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T14:29:44.298 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:44 vm07 bash[17480]: audit 2026-03-09T14:29:43.296962+0000 mgr.y (mgr.14152) 5 : audit [DBG] from='client.14156 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T14:29:44.298 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:44 vm07 bash[17480]: audit 2026-03-09T14:29:43.555244+0000 mgr.y (mgr.14152) 6 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:29:44.298 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:44 vm07 bash[17480]: audit 2026-03-09T14:29:43.577067+0000 mon.a (mon.0) 85 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:29:44.298 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:44 vm07 bash[17480]: audit 2026-03-09T14:29:43.579078+0000 mon.a (mon.0) 86 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:29:44.298 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:44 vm07 bash[17480]: audit 2026-03-09T14:29:43.989595+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:29:44.299 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:44 vm07 bash[17480]: audit 2026-03-09T14:29:44.232112+0000 mon.a (mon.0) 88 : audit [DBG] from='client.? 192.168.123.107:0/845116512' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-09T14:29:44.810 INFO:teuthology.orchestra.run.vm07.stderr:/usr/bin/ceph: set mgr/dashboard/cluster/status 2026-03-09T14:29:44.841 INFO:teuthology.orchestra.run.vm07.stderr:You can access the Ceph CLI with: 2026-03-09T14:29:44.842 INFO:teuthology.orchestra.run.vm07.stderr: 2026-03-09T14:29:44.842 INFO:teuthology.orchestra.run.vm07.stderr: sudo /home/ubuntu/cephtest/cephadm shell --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring 2026-03-09T14:29:44.842 INFO:teuthology.orchestra.run.vm07.stderr: 2026-03-09T14:29:44.842 INFO:teuthology.orchestra.run.vm07.stderr:Please consider enabling telemetry to help improve Ceph: 2026-03-09T14:29:44.842 INFO:teuthology.orchestra.run.vm07.stderr: 2026-03-09T14:29:44.842 INFO:teuthology.orchestra.run.vm07.stderr: ceph telemetry on 2026-03-09T14:29:44.842 INFO:teuthology.orchestra.run.vm07.stderr: 2026-03-09T14:29:44.842 INFO:teuthology.orchestra.run.vm07.stderr:For more information see: 2026-03-09T14:29:44.842 INFO:teuthology.orchestra.run.vm07.stderr: 2026-03-09T14:29:44.842 INFO:teuthology.orchestra.run.vm07.stderr: https://docs.ceph.com/docs/master/mgr/telemetry/ 2026-03-09T14:29:44.842 INFO:teuthology.orchestra.run.vm07.stderr: 2026-03-09T14:29:44.842 INFO:teuthology.orchestra.run.vm07.stderr:Bootstrap complete. 2026-03-09T14:29:44.857 INFO:tasks.cephadm:Fetching config... 2026-03-09T14:29:44.857 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-09T14:29:44.857 DEBUG:teuthology.orchestra.run.vm07:> dd if=/etc/ceph/ceph.conf of=/dev/stdout 2026-03-09T14:29:44.859 INFO:tasks.cephadm:Fetching client.admin keyring... 2026-03-09T14:29:44.859 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-09T14:29:44.859 DEBUG:teuthology.orchestra.run.vm07:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout 2026-03-09T14:29:44.903 INFO:tasks.cephadm:Fetching mon keyring... 2026-03-09T14:29:44.903 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-09T14:29:44.903 DEBUG:teuthology.orchestra.run.vm07:> sudo dd if=/var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/mon.a/keyring of=/dev/stdout 2026-03-09T14:29:44.950 INFO:tasks.cephadm:Fetching pub ssh key... 2026-03-09T14:29:44.951 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-09T14:29:44.951 DEBUG:teuthology.orchestra.run.vm07:> dd if=/home/ubuntu/cephtest/ceph.pub of=/dev/stdout 2026-03-09T14:29:44.997 INFO:tasks.cephadm:Installing pub ssh key for root users... 
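After "Bootstrap complete." the task copies the generated conf and admin keyring off the bootstrap node with plain dd reads, and the entries that follow append the cluster's public SSH key to /root/.ssh/authorized_keys on every target so the orchestrator can manage them. A stand-alone sketch of the same steps, assuming password-less ssh/sudo to the hosts and the ceph.pub exported earlier; host names come from this run:

    # Sketch only: mirror the conf/keyring fetch and ssh-key distribution performed by the task.
    ssh vm07 'dd if=/etc/ceph/ceph.conf of=/dev/stdout' > ceph.conf
    ssh vm07 'dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout' > ceph.client.admin.keyring
    pubkey=$(cat ceph.pub)
    ssh vm11 "sudo install -d -m 0700 /root/.ssh; echo '$pubkey' | sudo tee -a /root/.ssh/authorized_keys; sudo chmod 0600 /root/.ssh/authorized_keys"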
2026-03-09T14:29:44.997 DEBUG:teuthology.orchestra.run.vm07:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDGYnnFFMouXWBxQFy+cGYrRVF9vWt4s0N0URN8b23t4EHODSeM6sS/7tjJswwuNm2sov25xo/smfZ39la3ki8wCPCmB5hPvLs+ZPRdV4++asTFNkIldWob3paueHpVdY5w1EGwxxM7MzWMlar1eAVHRtxC6kxpja6bnmyqWVJzx8KvOv2yzAbdn8StSa3beeEHReRFq69zw2qtxlU6a8PL0pyy5h/5GkTrk5XUW9FSsGLdYBRUzA+DafY8DOV2OCvghnvFypmclsWY1hJFNmjfdlhsTHzrYvKAto4uT3LnB4d8gRkFCPqNAKfAH41ccnFcvU4NrqphXvgdYQMXP9Pvf73hJFblOgkx415cC9jo/Kq4Zrq/b6B5GjRk32XeMrXVwujnZ0D8prgn2OAxFYAU9VdOsqDyMAhA5CM/JdHJw/hoFZP+oyWmXvOkbLr5U1dvibvhVM72HqDk4CDpnkWUpIi+bZvKmLPYIIHzPC3udFf9yyP/tQ6w0PpiTz64olU= ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-09T14:29:45.052 INFO:teuthology.orchestra.run.vm07.stdout:ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDGYnnFFMouXWBxQFy+cGYrRVF9vWt4s0N0URN8b23t4EHODSeM6sS/7tjJswwuNm2sov25xo/smfZ39la3ki8wCPCmB5hPvLs+ZPRdV4++asTFNkIldWob3paueHpVdY5w1EGwxxM7MzWMlar1eAVHRtxC6kxpja6bnmyqWVJzx8KvOv2yzAbdn8StSa3beeEHReRFq69zw2qtxlU6a8PL0pyy5h/5GkTrk5XUW9FSsGLdYBRUzA+DafY8DOV2OCvghnvFypmclsWY1hJFNmjfdlhsTHzrYvKAto4uT3LnB4d8gRkFCPqNAKfAH41ccnFcvU4NrqphXvgdYQMXP9Pvf73hJFblOgkx415cC9jo/Kq4Zrq/b6B5GjRk32XeMrXVwujnZ0D8prgn2OAxFYAU9VdOsqDyMAhA5CM/JdHJw/hoFZP+oyWmXvOkbLr5U1dvibvhVM72HqDk4CDpnkWUpIi+bZvKmLPYIIHzPC3udFf9yyP/tQ6w0PpiTz64olU= ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040 2026-03-09T14:29:45.058 DEBUG:teuthology.orchestra.run.vm11:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDGYnnFFMouXWBxQFy+cGYrRVF9vWt4s0N0URN8b23t4EHODSeM6sS/7tjJswwuNm2sov25xo/smfZ39la3ki8wCPCmB5hPvLs+ZPRdV4++asTFNkIldWob3paueHpVdY5w1EGwxxM7MzWMlar1eAVHRtxC6kxpja6bnmyqWVJzx8KvOv2yzAbdn8StSa3beeEHReRFq69zw2qtxlU6a8PL0pyy5h/5GkTrk5XUW9FSsGLdYBRUzA+DafY8DOV2OCvghnvFypmclsWY1hJFNmjfdlhsTHzrYvKAto4uT3LnB4d8gRkFCPqNAKfAH41ccnFcvU4NrqphXvgdYQMXP9Pvf73hJFblOgkx415cC9jo/Kq4Zrq/b6B5GjRk32XeMrXVwujnZ0D8prgn2OAxFYAU9VdOsqDyMAhA5CM/JdHJw/hoFZP+oyWmXvOkbLr5U1dvibvhVM72HqDk4CDpnkWUpIi+bZvKmLPYIIHzPC3udFf9yyP/tQ6w0PpiTz64olU= ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-09T14:29:45.069 INFO:teuthology.orchestra.run.vm11.stdout:ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDGYnnFFMouXWBxQFy+cGYrRVF9vWt4s0N0URN8b23t4EHODSeM6sS/7tjJswwuNm2sov25xo/smfZ39la3ki8wCPCmB5hPvLs+ZPRdV4++asTFNkIldWob3paueHpVdY5w1EGwxxM7MzWMlar1eAVHRtxC6kxpja6bnmyqWVJzx8KvOv2yzAbdn8StSa3beeEHReRFq69zw2qtxlU6a8PL0pyy5h/5GkTrk5XUW9FSsGLdYBRUzA+DafY8DOV2OCvghnvFypmclsWY1hJFNmjfdlhsTHzrYvKAto4uT3LnB4d8gRkFCPqNAKfAH41ccnFcvU4NrqphXvgdYQMXP9Pvf73hJFblOgkx415cC9jo/Kq4Zrq/b6B5GjRk32XeMrXVwujnZ0D8prgn2OAxFYAU9VdOsqDyMAhA5CM/JdHJw/hoFZP+oyWmXvOkbLr5U1dvibvhVM72HqDk4CDpnkWUpIi+bZvKmLPYIIHzPC3udFf9yyP/tQ6w0PpiTz64olU= ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040 2026-03-09T14:29:45.074 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph config set mgr mgr/cephadm/allow_ptrace true 2026-03-09T14:29:45.384 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:45 vm07 bash[17480]: audit 2026-03-09T14:29:43.838874+0000 mgr.y (mgr.14152) 7 : audit [DBG] from='client.14166 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": 
true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:29:45.384 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:45 vm07 bash[17480]: audit 2026-03-09T14:29:44.803604+0000 mon.a (mon.0) 89 : audit [INF] from='client.? 192.168.123.107:0/1028878169' entity='client.admin' 2026-03-09T14:29:45.384 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:45 vm07 bash[17480]: cluster 2026-03-09T14:29:44.992837+0000 mon.a (mon.0) 90 : cluster [DBG] mgrmap e12: y(active, since 2s) 2026-03-09T14:29:45.384 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:45 vm07 bash[17480]: audit 2026-03-09T14:29:45.171359+0000 mon.a (mon.0) 91 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:29:45.629 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755 2026-03-09T14:29:45.630 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph orch client-keyring set client.admin '*' --mode 0755 2026-03-09T14:29:46.041 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm11 2026-03-09T14:29:46.041 DEBUG:teuthology.orchestra.run.vm11:> set -ex 2026-03-09T14:29:46.041 DEBUG:teuthology.orchestra.run.vm11:> dd of=/etc/ceph/ceph.conf 2026-03-09T14:29:46.044 DEBUG:teuthology.orchestra.run.vm11:> set -ex 2026-03-09T14:29:46.044 DEBUG:teuthology.orchestra.run.vm11:> dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:29:46.090 INFO:tasks.cephadm:Adding host vm11 to orchestrator... 2026-03-09T14:29:46.090 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph orch host add vm11 2026-03-09T14:29:46.667 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:46 vm07 bash[17480]: audit 2026-03-09T14:29:45.473839+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:29:46.667 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:46 vm07 bash[17480]: audit 2026-03-09T14:29:45.574712+0000 mon.a (mon.0) 93 : audit [INF] from='client.? 
192.168.123.107:0/23013285' entity='client.admin' 2026-03-09T14:29:46.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:46 vm07 bash[17480]: audit 2026-03-09T14:29:45.993868+0000 mon.a (mon.0) 94 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:29:47.023 INFO:teuthology.orchestra.run.vm07.stdout:Added host 'vm11' with addr '192.168.123.111' 2026-03-09T14:29:47.070 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph orch host ls --format=json 2026-03-09T14:29:47.462 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T14:29:47.462 INFO:teuthology.orchestra.run.vm07.stdout:[{"addr": "192.168.123.107", "hostname": "vm07", "labels": [], "status": ""}, {"addr": "192.168.123.111", "hostname": "vm11", "labels": [], "status": ""}] 2026-03-09T14:29:47.511 INFO:tasks.cephadm:Setting crush tunables to default 2026-03-09T14:29:47.511 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph osd crush tunables default 2026-03-09T14:29:47.604 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:47 vm07 bash[17480]: audit 2026-03-09T14:29:45.991277+0000 mgr.y (mgr.14152) 8 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:29:47.604 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:47 vm07 bash[17480]: audit 2026-03-09T14:29:46.462608+0000 mgr.y (mgr.14152) 9 : audit [DBG] from='client.14178 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm11", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:29:47.604 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:47 vm07 bash[17480]: audit 2026-03-09T14:29:47.018275+0000 mon.a (mon.0) 95 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:29:48.486 INFO:teuthology.orchestra.run.vm07.stderr:adjusted tunables profile to default 2026-03-09T14:29:48.543 INFO:tasks.cephadm:Adding mon.a on vm07 2026-03-09T14:29:48.543 INFO:tasks.cephadm:Adding mon.c on vm07 2026-03-09T14:29:48.543 INFO:tasks.cephadm:Adding mon.b on vm11 2026-03-09T14:29:48.543 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph orch apply mon '3;vm07:192.168.123.107=a;vm07:[v2:192.168.123.107:3301,v1:192.168.123.107:6790]=c;vm11:192.168.123.111=b' 2026-03-09T14:29:48.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:48 vm07 bash[17480]: cephadm 2026-03-09T14:29:46.790788+0000 mgr.y (mgr.14152) 10 : cephadm [INF] Deploying cephadm binary to vm11 2026-03-09T14:29:48.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:48 vm07 bash[17480]: cephadm 2026-03-09T14:29:47.018548+0000 mgr.y (mgr.14152) 11 : cephadm [INF] Added host vm11 2026-03-09T14:29:48.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:48 vm07 bash[17480]: audit 2026-03-09T14:29:47.457333+0000 mgr.y (mgr.14152) 12 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: 
dispatch 2026-03-09T14:29:48.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:48 vm07 bash[17480]: audit 2026-03-09T14:29:47.907408+0000 mon.a (mon.0) 96 : audit [INF] from='client.? 192.168.123.107:0/3999251449' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-09T14:29:48.931 INFO:teuthology.orchestra.run.vm11.stdout:Scheduled mon update... 2026-03-09T14:29:48.977 DEBUG:teuthology.orchestra.run.vm07:mon.c> sudo journalctl -f -n 0 -u ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@mon.c.service 2026-03-09T14:29:48.978 DEBUG:teuthology.orchestra.run.vm11:mon.b> sudo journalctl -f -n 0 -u ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@mon.b.service 2026-03-09T14:29:48.978 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 2026-03-09T14:29:48.979 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph mon dump -f json 2026-03-09T14:29:49.423 INFO:teuthology.orchestra.run.vm11.stdout: 2026-03-09T14:29:49.423 INFO:teuthology.orchestra.run.vm11.stdout:{"epoch":1,"fsid":"f59f9828-1bc3-11f1-bfd8-7b3d0c866040","modified":"2026-03-09T14:29:18.743288Z","created":"2026-03-09T14:29:18.743288Z","min_mon_release":17,"min_mon_release_name":"quincy","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:3300","nonce":0},{"type":"v1","addr":"192.168.123.107:6789","nonce":0}]},"addr":"192.168.123.107:6789/0","public_addr":"192.168.123.107:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T14:29:49.426 INFO:teuthology.orchestra.run.vm11.stderr:dumped monmap epoch 1 2026-03-09T14:29:49.740 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:49 vm07 bash[17480]: audit 2026-03-09T14:29:48.478705+0000 mon.a (mon.0) 97 : audit [INF] from='client.? 192.168.123.107:0/3999251449' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-09T14:29:49.740 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:49 vm07 bash[17480]: cluster 2026-03-09T14:29:48.478825+0000 mon.a (mon.0) 98 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T14:29:49.740 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:49 vm07 bash[17480]: cluster 2026-03-09T14:29:48.489171+0000 mon.a (mon.0) 99 : cluster [DBG] mgrmap e13: y(active, since 6s) 2026-03-09T14:29:49.740 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:49 vm07 bash[17480]: audit 2026-03-09T14:29:48.927206+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:29:49.740 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:49 vm07 bash[17480]: audit 2026-03-09T14:29:49.418990+0000 mon.a (mon.0) 101 : audit [DBG] from='client.? 
192.168.123.111:0/2238189282' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:29:49.740 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:49 vm07 bash[17785]: debug 2026-03-09T14:29:49.572+0000 7f5789a1b700 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec ServiceSpec.from_json(yaml.safe_load('''service_type: mon 2026-03-09T14:29:49.740 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:49 vm07 bash[17785]: service_name: mon 2026-03-09T14:29:49.740 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:49 vm07 bash[17785]: placement: 2026-03-09T14:29:49.740 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:49 vm07 bash[17785]: count: 3 2026-03-09T14:29:49.741 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:49 vm07 bash[17785]: hosts: 2026-03-09T14:29:49.741 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:49 vm07 bash[17785]: - vm07:192.168.123.107=a 2026-03-09T14:29:49.741 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:49 vm07 bash[17785]: - vm07:[v2:192.168.123.107:3301,v1:192.168.123.107:6790]=c 2026-03-09T14:29:49.741 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:49 vm07 bash[17785]: - vm11:192.168.123.111=b 2026-03-09T14:29:49.741 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:49 vm07 bash[17785]: ''')): Cannot place on vm11: Unknown hosts 2026-03-09T14:29:50.471 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 2026-03-09T14:29:50.471 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph mon dump -f json 2026-03-09T14:29:50.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:50 vm07 bash[17480]: audit 2026-03-09T14:29:48.922960+0000 mgr.y (mgr.14152) 13 : audit [DBG] from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm07:192.168.123.107=a;vm07:[v2:192.168.123.107:3301,v1:192.168.123.107:6790]=c;vm11:192.168.123.111=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:29:50.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:50 vm07 bash[17480]: cephadm 2026-03-09T14:29:48.924122+0000 mgr.y (mgr.14152) 14 : cephadm [INF] Saving service mon spec with placement vm07:192.168.123.107=a;vm07:[v2:192.168.123.107:3301,v1:192.168.123.107:6790]=c;vm11:192.168.123.111=b;count:3 2026-03-09T14:29:50.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:50 vm07 bash[17480]: audit 2026-03-09T14:29:49.571578+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:29:50.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:50 vm07 bash[17480]: audit 2026-03-09T14:29:49.572031+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:29:50.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:50 vm07 bash[17480]: audit 2026-03-09T14:29:49.574423+0000 mon.a (mon.0) 104 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:29:50.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:50 vm07 bash[17480]: cephadm 2026-03-09T14:29:49.575097+0000 mgr.y (mgr.14152) 15 : cephadm [ERR] Failed to apply mon spec ServiceSpec.from_json(yaml.safe_load('''service_type: mon 2026-03-09T14:29:50.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:50 
vm07 bash[17480]: service_name: mon 2026-03-09T14:29:50.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:50 vm07 bash[17480]: placement: 2026-03-09T14:29:50.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:50 vm07 bash[17480]: count: 3 2026-03-09T14:29:50.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:50 vm07 bash[17480]: hosts: 2026-03-09T14:29:50.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:50 vm07 bash[17480]: - vm07:192.168.123.107=a 2026-03-09T14:29:50.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:50 vm07 bash[17480]: - vm07:[v2:192.168.123.107:3301,v1:192.168.123.107:6790]=c 2026-03-09T14:29:50.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:50 vm07 bash[17480]: - vm11:192.168.123.111=b 2026-03-09T14:29:50.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:50 vm07 bash[17480]: ''')): Cannot place on vm11: Unknown hosts 2026-03-09T14:29:50.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:50 vm07 bash[17480]: cephadm 2026-03-09T14:29:49.575203+0000 mgr.y (mgr.14152) 16 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-09T14:29:50.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:50 vm07 bash[17480]: audit 2026-03-09T14:29:49.575399+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:29:50.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:50 vm07 bash[17480]: audit 2026-03-09T14:29:49.576311+0000 mon.a (mon.0) 106 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:29:50.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:50 vm07 bash[17480]: audit 2026-03-09T14:29:49.576709+0000 mon.a (mon.0) 107 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:29:50.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:50 vm07 bash[17480]: cephadm 2026-03-09T14:29:49.577144+0000 mgr.y (mgr.14152) 17 : cephadm [INF] Reconfiguring daemon mon.a on vm07 2026-03-09T14:29:50.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:50 vm07 bash[17480]: audit 2026-03-09T14:29:49.793006+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:29:50.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:50 vm07 bash[17480]: audit 2026-03-09T14:29:49.793562+0000 mon.a (mon.0) 109 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:29:50.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:50 vm07 bash[17480]: audit 2026-03-09T14:29:49.794307+0000 mon.a (mon.0) 110 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:29:50.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:50 vm07 bash[17480]: audit 2026-03-09T14:29:49.794716+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:29:50.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:50 vm07 bash[17480]: audit 2026-03-09T14:29:49.897369+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:29:50.918 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:50 vm07 bash[17480]: audit 2026-03-09T14:29:49.918933+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:29:50.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:50 vm07 bash[17480]: audit 2026-03-09T14:29:50.524881+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:29:50.936 INFO:teuthology.orchestra.run.vm11.stdout: 2026-03-09T14:29:50.936 INFO:teuthology.orchestra.run.vm11.stdout:{"epoch":1,"fsid":"f59f9828-1bc3-11f1-bfd8-7b3d0c866040","modified":"2026-03-09T14:29:18.743288Z","created":"2026-03-09T14:29:18.743288Z","min_mon_release":17,"min_mon_release_name":"quincy","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:3300","nonce":0},{"type":"v1","addr":"192.168.123.107:6789","nonce":0}]},"addr":"192.168.123.107:6789/0","public_addr":"192.168.123.107:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T14:29:50.938 INFO:teuthology.orchestra.run.vm11.stderr:dumped monmap epoch 1 2026-03-09T14:29:51.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:51 vm07 bash[17480]: cephadm 2026-03-09T14:29:49.795330+0000 mgr.y (mgr.14152) 18 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-09T14:29:51.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:51 vm07 bash[17480]: cephadm 2026-03-09T14:29:49.847430+0000 mgr.y (mgr.14152) 19 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:29:51.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:51 vm07 bash[17480]: audit 2026-03-09T14:29:50.819026+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:29:51.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:51 vm07 bash[17480]: audit 2026-03-09T14:29:50.932073+0000 mon.a (mon.0) 116 : audit [DBG] from='client.? 192.168.123.111:0/1813325539' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:29:51.985 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 
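The repeated "Waiting for 3 mons in monmap..." / "ceph mon dump -f json" pairs above are the task polling the monmap until all three requested monitors have joined; in this run the count only reaches 3 once mon.b is deployed on vm11 and the election settles (monmap e3 further down). A minimal sketch of that wait, assuming jq; the expected count of 3 comes from this run and the interval is illustrative:

    # Sketch only: wait until the monmap contains the expected number of monitors.
    want=3
    until [ "$(ceph mon dump -f json 2>/dev/null | jq '.mons | length')" -ge "$want" ]; do
        sleep 1    # interval is illustrative
    done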
2026-03-09T14:29:51.985 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph mon dump -f json 2026-03-09T14:29:52.424 INFO:teuthology.orchestra.run.vm11.stdout: 2026-03-09T14:29:52.425 INFO:teuthology.orchestra.run.vm11.stdout:{"epoch":1,"fsid":"f59f9828-1bc3-11f1-bfd8-7b3d0c866040","modified":"2026-03-09T14:29:18.743288Z","created":"2026-03-09T14:29:18.743288Z","min_mon_release":17,"min_mon_release_name":"quincy","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:3300","nonce":0},{"type":"v1","addr":"192.168.123.107:6789","nonce":0}]},"addr":"192.168.123.107:6789/0","public_addr":"192.168.123.107:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T14:29:52.428 INFO:teuthology.orchestra.run.vm11.stderr:dumped monmap epoch 1 2026-03-09T14:29:52.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:52 vm07 bash[17480]: audit 2026-03-09T14:29:52.420383+0000 mon.a (mon.0) 117 : audit [DBG] from='client.? 192.168.123.111:0/2715479042' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:29:53.472 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 2026-03-09T14:29:53.472 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph mon dump -f json 2026-03-09T14:29:53.559 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:53 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:29:53.559 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:53 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:29:53.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:53 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:29:53.917 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:29:53 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:29:53.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:29:53 vm07 bash[22585]: debug 2026-03-09T14:29:53.736+0000 7f6fab951700 10 mon.c@-1(synchronizing) e1 handle_conf_change mon_allow_pool_delete,mon_cluster_log_to_file 2026-03-09T14:29:55.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:29:55 vm11 bash[17885]: debug 2026-03-09T14:29:55.038+0000 7f464bf42700 10 mon.b@-1(synchronizing) e2 handle_conf_change mon_allow_pool_delete,mon_cluster_log_to_file 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:58 vm07 bash[17480]: cephadm 2026-03-09T14:29:53.629307+0000 mgr.y (mgr.14152) 21 : cephadm [INF] Deploying daemon mon.b on vm11 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:58 vm07 bash[17480]: audit 2026-03-09T14:29:53.745061+0000 mon.a (mon.0) 128 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:58 vm07 bash[17480]: cluster 2026-03-09T14:29:53.746106+0000 mon.a (mon.0) 129 : cluster [INF] mon.a calling monitor election 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:58 vm07 bash[17480]: audit 2026-03-09T14:29:53.747731+0000 mon.a (mon.0) 130 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:58 vm07 bash[17480]: audit 2026-03-09T14:29:54.739659+0000 mon.a (mon.0) 131 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:58 vm07 bash[17480]: audit 2026-03-09T14:29:55.043248+0000 mon.a (mon.0) 132 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:58 vm07 bash[17480]: audit 2026-03-09T14:29:55.739608+0000 mon.a (mon.0) 133 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:58 vm07 bash[17480]: cluster 2026-03-09T14:29:55.742898+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:58 vm07 bash[17480]: audit 2026-03-09T14:29:56.043345+0000 mon.a (mon.0) 134 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:58 vm07 bash[17480]: audit 2026-03-09T14:29:56.739877+0000 mon.a (mon.0) 135 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:58 vm07 bash[17480]: audit 2026-03-09T14:29:57.043308+0000 mon.a (mon.0) 136 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:29:59.168 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:58 vm07 bash[17480]: audit 2026-03-09T14:29:57.739848+0000 mon.a (mon.0) 137 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:58 vm07 bash[17480]: audit 2026-03-09T14:29:58.043930+0000 mon.a (mon.0) 138 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:58 vm07 bash[17480]: audit 2026-03-09T14:29:58.739843+0000 mon.a (mon.0) 139 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:58 vm07 bash[17480]: cluster 2026-03-09T14:29:58.750382+0000 mon.a (mon.0) 140 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1) 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:58 vm07 bash[17480]: cluster 2026-03-09T14:29:58.755298+0000 mon.a (mon.0) 141 : cluster [DBG] monmap e2: 2 mons at {a=[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0],c=[v2:192.168.123.107:3301/0,v1:192.168.123.107:6790/0]} 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:58 vm07 bash[17480]: cluster 2026-03-09T14:29:58.755392+0000 mon.a (mon.0) 142 : cluster [DBG] fsmap 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:58 vm07 bash[17480]: cluster 2026-03-09T14:29:58.755477+0000 mon.a (mon.0) 143 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:58 vm07 bash[17480]: cluster 2026-03-09T14:29:58.755815+0000 mon.a (mon.0) 144 : cluster [DBG] mgrmap e13: y(active, since 16s) 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:58 vm07 bash[17480]: cluster 2026-03-09T14:29:58.760481+0000 mon.a (mon.0) 145 : cluster [INF] overall HEALTH_OK 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:58 vm07 bash[17480]: audit 2026-03-09T14:29:58.763597+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:58 vm07 bash[17480]: audit 2026-03-09T14:29:58.765118+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:58 vm07 bash[17480]: audit 2026-03-09T14:29:58.766005+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:29:58 vm07 bash[17480]: audit 2026-03-09T14:29:58.766713+0000 mon.a (mon.0) 149 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:29:58 vm07 bash[22585]: cephadm 2026-03-09T14:29:53.629307+0000 mgr.y (mgr.14152) 21 : cephadm [INF] Deploying daemon mon.b on vm11 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:29:58 vm07 bash[22585]: audit 
2026-03-09T14:29:53.745061+0000 mon.a (mon.0) 128 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:29:58 vm07 bash[22585]: cluster 2026-03-09T14:29:53.746106+0000 mon.a (mon.0) 129 : cluster [INF] mon.a calling monitor election 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:29:58 vm07 bash[22585]: audit 2026-03-09T14:29:53.747731+0000 mon.a (mon.0) 130 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:29:58 vm07 bash[22585]: audit 2026-03-09T14:29:54.739659+0000 mon.a (mon.0) 131 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:29:58 vm07 bash[22585]: audit 2026-03-09T14:29:55.043248+0000 mon.a (mon.0) 132 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:29:58 vm07 bash[22585]: audit 2026-03-09T14:29:55.739608+0000 mon.a (mon.0) 133 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:29:58 vm07 bash[22585]: cluster 2026-03-09T14:29:55.742898+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:29:58 vm07 bash[22585]: audit 2026-03-09T14:29:56.043345+0000 mon.a (mon.0) 134 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:29:58 vm07 bash[22585]: audit 2026-03-09T14:29:56.739877+0000 mon.a (mon.0) 135 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:29:58 vm07 bash[22585]: audit 2026-03-09T14:29:57.043308+0000 mon.a (mon.0) 136 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:29:58 vm07 bash[22585]: audit 2026-03-09T14:29:57.739848+0000 mon.a (mon.0) 137 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:29:58 vm07 bash[22585]: audit 2026-03-09T14:29:58.043930+0000 mon.a (mon.0) 138 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:29:58 vm07 bash[22585]: audit 2026-03-09T14:29:58.739843+0000 mon.a (mon.0) 139 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:29:58 vm07 bash[22585]: cluster 2026-03-09T14:29:58.750382+0000 mon.a (mon.0) 
140 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1) 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:29:58 vm07 bash[22585]: cluster 2026-03-09T14:29:58.755298+0000 mon.a (mon.0) 141 : cluster [DBG] monmap e2: 2 mons at {a=[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0],c=[v2:192.168.123.107:3301/0,v1:192.168.123.107:6790/0]} 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:29:58 vm07 bash[22585]: cluster 2026-03-09T14:29:58.755392+0000 mon.a (mon.0) 142 : cluster [DBG] fsmap 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:29:58 vm07 bash[22585]: cluster 2026-03-09T14:29:58.755477+0000 mon.a (mon.0) 143 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:29:58 vm07 bash[22585]: cluster 2026-03-09T14:29:58.755815+0000 mon.a (mon.0) 144 : cluster [DBG] mgrmap e13: y(active, since 16s) 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:29:58 vm07 bash[22585]: cluster 2026-03-09T14:29:58.760481+0000 mon.a (mon.0) 145 : cluster [INF] overall HEALTH_OK 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:29:58 vm07 bash[22585]: audit 2026-03-09T14:29:58.763597+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:29:59.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:29:58 vm07 bash[22585]: audit 2026-03-09T14:29:58.765118+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:29:59.169 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:29:58 vm07 bash[22585]: audit 2026-03-09T14:29:58.766005+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:29:59.169 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:29:58 vm07 bash[22585]: audit 2026-03-09T14:29:58.766713+0000 mon.a (mon.0) 149 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:30:04.329 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:04 vm07 bash[17480]: cluster 2026-03-09T14:29:59.049693+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election 2026-03-09T14:30:04.329 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:04 vm07 bash[17480]: cluster 2026-03-09T14:29:59.051147+0000 mon.a (mon.0) 151 : cluster [INF] mon.a calling monitor election 2026-03-09T14:30:04.329 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:04 vm07 bash[17480]: audit 2026-03-09T14:29:59.053208+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:30:04.329 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:04 vm07 bash[17480]: audit 2026-03-09T14:29:59.053502+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:30:04.329 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:04 vm07 bash[17480]: audit 2026-03-09T14:29:59.053769+0000 mon.a (mon.0) 154 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:30:04.329 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:04 vm07 bash[17480]: audit 2026-03-09T14:30:00.044008+0000 mon.a (mon.0) 155 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:30:04.329 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:04 vm07 bash[17480]: audit 2026-03-09T14:30:01.044117+0000 mon.a (mon.0) 156 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:30:04.329 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:04 vm07 bash[17480]: audit 2026-03-09T14:30:02.044179+0000 mon.a (mon.0) 157 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:30:04.329 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:04 vm07 bash[17480]: cluster 2026-03-09T14:30:02.954486+0000 mgr.y (mgr.14152) 22 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:30:04.329 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:04 vm07 bash[17480]: audit 2026-03-09T14:30:03.044211+0000 mon.a (mon.0) 158 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:30:04.329 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:04 vm07 bash[17480]: audit 2026-03-09T14:30:04.044305+0000 mon.a (mon.0) 159 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:30:04.329 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:04 vm07 bash[17480]: cluster 2026-03-09T14:30:04.053758+0000 mon.a (mon.0) 160 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1) 2026-03-09T14:30:04.329 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:04 vm07 bash[17480]: cluster 2026-03-09T14:30:04.056282+0000 mon.a (mon.0) 161 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0],b=[v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0],c=[v2:192.168.123.107:3301/0,v1:192.168.123.107:6790/0]} 2026-03-09T14:30:04.329 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:04 vm07 bash[17480]: cluster 2026-03-09T14:30:04.056326+0000 mon.a (mon.0) 162 : cluster [DBG] fsmap 2026-03-09T14:30:04.329 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:04 vm07 bash[17480]: cluster 2026-03-09T14:30:04.056346+0000 mon.a (mon.0) 163 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T14:30:04.329 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:04 vm07 bash[17480]: cluster 2026-03-09T14:30:04.056445+0000 mon.a (mon.0) 164 : cluster [DBG] mgrmap e13: y(active, since 21s) 2026-03-09T14:30:04.329 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:04 vm07 bash[17480]: cluster 2026-03-09T14:30:04.059074+0000 mon.a (mon.0) 165 : cluster [INF] overall HEALTH_OK 2026-03-09T14:30:04.329 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:04 vm07 bash[17480]: audit 2026-03-09T14:30:04.062473+0000 mon.a (mon.0) 166 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:04.329 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:04 vm07 bash[17480]: audit 2026-03-09T14:30:04.066024+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:04.329 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:04 vm07 bash[17480]: audit 
2026-03-09T14:30:04.074569+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:04.330 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:04 vm07 bash[22585]: cluster 2026-03-09T14:29:59.049693+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election 2026-03-09T14:30:04.330 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:04 vm07 bash[22585]: cluster 2026-03-09T14:29:59.051147+0000 mon.a (mon.0) 151 : cluster [INF] mon.a calling monitor election 2026-03-09T14:30:04.330 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:04 vm07 bash[22585]: audit 2026-03-09T14:29:59.053208+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:30:04.330 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:04 vm07 bash[22585]: audit 2026-03-09T14:29:59.053502+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:30:04.330 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:04 vm07 bash[22585]: audit 2026-03-09T14:29:59.053769+0000 mon.a (mon.0) 154 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:30:04.330 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:04 vm07 bash[22585]: audit 2026-03-09T14:30:00.044008+0000 mon.a (mon.0) 155 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:30:04.330 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:04 vm07 bash[22585]: audit 2026-03-09T14:30:01.044117+0000 mon.a (mon.0) 156 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:30:04.330 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:04 vm07 bash[22585]: audit 2026-03-09T14:30:02.044179+0000 mon.a (mon.0) 157 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:30:04.330 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:04 vm07 bash[22585]: cluster 2026-03-09T14:30:02.954486+0000 mgr.y (mgr.14152) 22 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:30:04.330 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:04 vm07 bash[22585]: audit 2026-03-09T14:30:03.044211+0000 mon.a (mon.0) 158 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:30:04.330 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:04 vm07 bash[22585]: audit 2026-03-09T14:30:04.044305+0000 mon.a (mon.0) 159 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:30:04.330 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:04 vm07 bash[22585]: cluster 2026-03-09T14:30:04.053758+0000 mon.a (mon.0) 160 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1) 2026-03-09T14:30:04.330 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:04 vm07 bash[22585]: cluster 2026-03-09T14:30:04.056282+0000 mon.a (mon.0) 161 : cluster [DBG] monmap e3: 3 mons at 
{a=[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0],b=[v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0],c=[v2:192.168.123.107:3301/0,v1:192.168.123.107:6790/0]} 2026-03-09T14:30:04.330 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:04 vm07 bash[22585]: cluster 2026-03-09T14:30:04.056326+0000 mon.a (mon.0) 162 : cluster [DBG] fsmap 2026-03-09T14:30:04.330 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:04 vm07 bash[22585]: cluster 2026-03-09T14:30:04.056346+0000 mon.a (mon.0) 163 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T14:30:04.330 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:04 vm07 bash[22585]: cluster 2026-03-09T14:30:04.056445+0000 mon.a (mon.0) 164 : cluster [DBG] mgrmap e13: y(active, since 21s) 2026-03-09T14:30:04.330 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:04 vm07 bash[22585]: cluster 2026-03-09T14:30:04.059074+0000 mon.a (mon.0) 165 : cluster [INF] overall HEALTH_OK 2026-03-09T14:30:04.330 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:04 vm07 bash[22585]: audit 2026-03-09T14:30:04.062473+0000 mon.a (mon.0) 166 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:04.330 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:04 vm07 bash[22585]: audit 2026-03-09T14:30:04.066024+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:04.330 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:04 vm07 bash[22585]: audit 2026-03-09T14:30:04.074569+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:04.351 INFO:teuthology.orchestra.run.vm11.stdout: 2026-03-09T14:30:04.351 INFO:teuthology.orchestra.run.vm11.stdout:{"epoch":3,"fsid":"f59f9828-1bc3-11f1-bfd8-7b3d0c866040","modified":"2026-03-09T14:29:59.044579Z","created":"2026-03-09T14:29:18.743288Z","min_mon_release":17,"min_mon_release_name":"quincy","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:3300","nonce":0},{"type":"v1","addr":"192.168.123.107:6789","nonce":0}]},"addr":"192.168.123.107:6789/0","public_addr":"192.168.123.107:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:3301","nonce":0},{"type":"v1","addr":"192.168.123.107:6790","nonce":0}]},"addr":"192.168.123.107:6790/0","public_addr":"192.168.123.107:6790/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:3300","nonce":0},{"type":"v1","addr":"192.168.123.111:6789","nonce":0}]},"addr":"192.168.123.111:6789/0","public_addr":"192.168.123.111:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]} 2026-03-09T14:30:04.354 INFO:teuthology.orchestra.run.vm11.stderr:dumped monmap epoch 3 2026-03-09T14:30:04.414 INFO:tasks.cephadm:Generating final ceph.conf file... 
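The monmap dumped above (epoch 3) already lists all three monitors, but its "quorum" field is still [0,1]: mon.b (rank 2) has been deployed and is calling elections, yet has not joined quorum at this point in the log. A quick way to confirm that from the dumped JSON is jq, which this suite already uses elsewhere; a minimal sketch, assuming the JSON above has been saved to monmap.json (a hypothetical filename):

  # list rank, name and v2 address of each monitor in the dumped monmap
  jq -r '.mons[] | "\(.rank) \(.name) \(.public_addrs.addrvec[0].addr)"' monmap.json
  # show the quorum ranks; [0,1] here (mons a and c) until mon.b finishes joining
  jq -c '.quorum' monmap.json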
2026-03-09T14:30:04.414 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph config generate-minimal-conf 2026-03-09T14:30:04.903 INFO:teuthology.orchestra.run.vm07.stdout:# minimal ceph.conf for f59f9828-1bc3-11f1-bfd8-7b3d0c866040 2026-03-09T14:30:04.903 INFO:teuthology.orchestra.run.vm07.stdout:[global] 2026-03-09T14:30:04.903 INFO:teuthology.orchestra.run.vm07.stdout: fsid = f59f9828-1bc3-11f1-bfd8-7b3d0c866040 2026-03-09T14:30:04.903 INFO:teuthology.orchestra.run.vm07.stdout: mon_host = [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] [v2:192.168.123.107:3301/0,v1:192.168.123.107:6790/0] 2026-03-09T14:30:04.968 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring... 2026-03-09T14:30:04.968 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-09T14:30:04.968 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/ceph/ceph.conf 2026-03-09T14:30:04.975 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-09T14:30:04.975 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:30:05.023 DEBUG:teuthology.orchestra.run.vm11:> set -ex 2026-03-09T14:30:05.023 DEBUG:teuthology.orchestra.run.vm11:> sudo dd of=/etc/ceph/ceph.conf 2026-03-09T14:30:05.029 DEBUG:teuthology.orchestra.run.vm11:> set -ex 2026-03-09T14:30:05.029 DEBUG:teuthology.orchestra.run.vm11:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:30:05.077 INFO:tasks.cephadm:Adding mgr.y on vm07 2026-03-09T14:30:05.077 INFO:tasks.cephadm:Adding mgr.x on vm11 2026-03-09T14:30:05.077 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph orch apply mgr '2;vm07=y;vm11=x' 2026-03-09T14:30:05.138 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:05 vm07 bash[17480]: cephadm 2026-03-09T14:30:04.063080+0000 mgr.y (mgr.14152) 23 : cephadm [INF] Updating vm11:/etc/ceph/ceph.conf 2026-03-09T14:30:05.138 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:05 vm07 bash[17480]: cephadm 2026-03-09T14:30:04.066411+0000 mgr.y (mgr.14152) 24 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-09T14:30:05.138 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:05 vm07 bash[17480]: cephadm 2026-03-09T14:30:04.118894+0000 mgr.y (mgr.14152) 25 : cephadm [INF] Updating vm11:/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:30:05.138 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:05 vm07 bash[17480]: audit 2026-03-09T14:30:04.131187+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:05.139 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:05 vm07 bash[17480]: audit 2026-03-09T14:30:04.171483+0000 mon.a (mon.0) 170 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:05.139 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:05 vm07 bash[17480]: audit 2026-03-09T14:30:04.174694+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:05.139 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:05 vm07 bash[17480]: cephadm 2026-03-09T14:30:04.175505+0000 mgr.y (mgr.14152) 26 : cephadm [INF] Reconfiguring mon.c (monmap 
changed)... 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:05 vm07 bash[17480]: audit 2026-03-09T14:30:04.176245+0000 mon.a (mon.0) 172 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:05 vm07 bash[17480]: audit 2026-03-09T14:30:04.176790+0000 mon.a (mon.0) 173 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:05 vm07 bash[17480]: audit 2026-03-09T14:30:04.177204+0000 mon.a (mon.0) 174 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:05 vm07 bash[17480]: cephadm 2026-03-09T14:30:04.177749+0000 mgr.y (mgr.14152) 27 : cephadm [INF] Reconfiguring daemon mon.c on vm07 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:05 vm07 bash[17480]: audit 2026-03-09T14:30:04.346648+0000 mon.a (mon.0) 175 : audit [DBG] from='client.? 192.168.123.111:0/3371081442' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:05 vm07 bash[17480]: audit 2026-03-09T14:30:04.404818+0000 mon.a (mon.0) 176 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:05 vm07 bash[17480]: cephadm 2026-03-09T14:30:04.405801+0000 mgr.y (mgr.14152) 28 : cephadm [INF] Reconfiguring mon.a (monmap changed)... 
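The cephadm task drives the cluster entirely through "cephadm shell" with the bootstrap image, so the ceph client it runs matches the v17.2.0 cluster that will later be upgraded. A minimal sketch of the pattern visible above, with the image and fsid taken from this run (the test invokes its own copy under /home/ubuntu/cephtest/cephadm; a plain cephadm on PATH is assumed here, and the values must be substituted for another cluster):

  # run a ceph command inside a containerized shell bound to this cluster
  sudo cephadm --image quay.io/ceph/ceph:v17.2.0 shell \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 \
      -- ceph config generate-minimal-conf
  # the resulting [global] section (fsid + mon_host) is pushed to each host with
  # "sudo dd of=/etc/ceph/ceph.conf", and the managers are placed with:
  #   ceph orch apply mgr '2;vm07=y;vm11=x'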
2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:05 vm07 bash[17480]: audit 2026-03-09T14:30:04.406811+0000 mon.a (mon.0) 177 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:05 vm07 bash[17480]: audit 2026-03-09T14:30:04.407564+0000 mon.a (mon.0) 178 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:05 vm07 bash[17480]: audit 2026-03-09T14:30:04.408197+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:05 vm07 bash[17480]: cephadm 2026-03-09T14:30:04.408889+0000 mgr.y (mgr.14152) 29 : cephadm [INF] Reconfiguring daemon mon.a on vm07 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:05 vm07 bash[17480]: audit 2026-03-09T14:30:04.641562+0000 mon.a (mon.0) 180 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:05 vm07 bash[17480]: audit 2026-03-09T14:30:04.643418+0000 mon.a (mon.0) 181 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:05 vm07 bash[17480]: audit 2026-03-09T14:30:04.644040+0000 mon.a (mon.0) 182 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:05 vm07 bash[17480]: audit 2026-03-09T14:30:04.644618+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:05 vm07 bash[17480]: audit 2026-03-09T14:30:04.861883+0000 mon.a (mon.0) 184 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:05 vm07 bash[17480]: audit 2026-03-09T14:30:04.862742+0000 mon.a (mon.0) 185 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:05 vm07 bash[17480]: audit 2026-03-09T14:30:04.864726+0000 mon.a (mon.0) 186 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:05 vm07 bash[17480]: audit 2026-03-09T14:30:04.865312+0000 mon.a (mon.0) 187 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:05 vm07 bash[17480]: audit 2026-03-09T14:30:04.898087+0000 mon.a (mon.0) 188 : audit [DBG] from='client.? 
192.168.123.107:0/235206320' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:05 vm07 bash[17480]: audit 2026-03-09T14:30:04.932357+0000 mon.a (mon.0) 189 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:05 vm07 bash[17480]: audit 2026-03-09T14:30:04.938135+0000 mon.a (mon.0) 190 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:05 vm07 bash[17480]: audit 2026-03-09T14:30:04.941403+0000 mon.a (mon.0) 191 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:05 vm07 bash[17480]: audit 2026-03-09T14:30:05.044213+0000 mon.a (mon.0) 192 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:05 vm07 bash[22585]: cephadm 2026-03-09T14:30:04.063080+0000 mgr.y (mgr.14152) 23 : cephadm [INF] Updating vm11:/etc/ceph/ceph.conf 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:05 vm07 bash[22585]: cephadm 2026-03-09T14:30:04.066411+0000 mgr.y (mgr.14152) 24 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:05 vm07 bash[22585]: cephadm 2026-03-09T14:30:04.118894+0000 mgr.y (mgr.14152) 25 : cephadm [INF] Updating vm11:/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:05 vm07 bash[22585]: audit 2026-03-09T14:30:04.131187+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:05 vm07 bash[22585]: audit 2026-03-09T14:30:04.171483+0000 mon.a (mon.0) 170 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:05 vm07 bash[22585]: audit 2026-03-09T14:30:04.174694+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:05 vm07 bash[22585]: cephadm 2026-03-09T14:30:04.175505+0000 mgr.y (mgr.14152) 26 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 
2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:05 vm07 bash[22585]: audit 2026-03-09T14:30:04.176245+0000 mon.a (mon.0) 172 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:05 vm07 bash[22585]: audit 2026-03-09T14:30:04.176790+0000 mon.a (mon.0) 173 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:05 vm07 bash[22585]: audit 2026-03-09T14:30:04.177204+0000 mon.a (mon.0) 174 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:05 vm07 bash[22585]: cephadm 2026-03-09T14:30:04.177749+0000 mgr.y (mgr.14152) 27 : cephadm [INF] Reconfiguring daemon mon.c on vm07 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:05 vm07 bash[22585]: audit 2026-03-09T14:30:04.346648+0000 mon.a (mon.0) 175 : audit [DBG] from='client.? 192.168.123.111:0/3371081442' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:05 vm07 bash[22585]: audit 2026-03-09T14:30:04.404818+0000 mon.a (mon.0) 176 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:05 vm07 bash[22585]: cephadm 2026-03-09T14:30:04.405801+0000 mgr.y (mgr.14152) 28 : cephadm [INF] Reconfiguring mon.a (monmap changed)... 
2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:05 vm07 bash[22585]: audit 2026-03-09T14:30:04.406811+0000 mon.a (mon.0) 177 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:05 vm07 bash[22585]: audit 2026-03-09T14:30:04.407564+0000 mon.a (mon.0) 178 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:05 vm07 bash[22585]: audit 2026-03-09T14:30:04.408197+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:05.290 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:05 vm07 bash[22585]: cephadm 2026-03-09T14:30:04.408889+0000 mgr.y (mgr.14152) 29 : cephadm [INF] Reconfiguring daemon mon.a on vm07 2026-03-09T14:30:05.291 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:05 vm07 bash[22585]: audit 2026-03-09T14:30:04.641562+0000 mon.a (mon.0) 180 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:05.291 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:05 vm07 bash[22585]: audit 2026-03-09T14:30:04.643418+0000 mon.a (mon.0) 181 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:30:05.291 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:05 vm07 bash[22585]: audit 2026-03-09T14:30:04.644040+0000 mon.a (mon.0) 182 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:30:05.291 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:05 vm07 bash[22585]: audit 2026-03-09T14:30:04.644618+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:05.291 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:05 vm07 bash[22585]: audit 2026-03-09T14:30:04.861883+0000 mon.a (mon.0) 184 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:05.291 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:05 vm07 bash[22585]: audit 2026-03-09T14:30:04.862742+0000 mon.a (mon.0) 185 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:30:05.291 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:05 vm07 bash[22585]: audit 2026-03-09T14:30:04.864726+0000 mon.a (mon.0) 186 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:05.291 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:05 vm07 bash[22585]: audit 2026-03-09T14:30:04.865312+0000 mon.a (mon.0) 187 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:30:05.291 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:05 vm07 bash[22585]: audit 2026-03-09T14:30:04.898087+0000 mon.a (mon.0) 188 : audit [DBG] from='client.? 
192.168.123.107:0/235206320' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:05.291 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:05 vm07 bash[22585]: audit 2026-03-09T14:30:04.932357+0000 mon.a (mon.0) 189 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:05.291 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:05 vm07 bash[22585]: audit 2026-03-09T14:30:04.938135+0000 mon.a (mon.0) 190 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:05.291 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:05 vm07 bash[22585]: audit 2026-03-09T14:30:04.941403+0000 mon.a (mon.0) 191 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:05.291 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:05 vm07 bash[22585]: audit 2026-03-09T14:30:05.044213+0000 mon.a (mon.0) 192 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:30:05.500 INFO:teuthology.orchestra.run.vm11.stdout:Scheduled mgr update... 2026-03-09T14:30:05.550 DEBUG:teuthology.orchestra.run.vm11:mgr.x> sudo journalctl -f -n 0 -u ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@mgr.x.service 2026-03-09T14:30:05.551 INFO:tasks.cephadm:Deploying OSDs... 2026-03-09T14:30:05.551 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-09T14:30:05.551 DEBUG:teuthology.orchestra.run.vm07:> dd if=/scratch_devs of=/dev/stdout 2026-03-09T14:30:05.554 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T14:30:05.554 DEBUG:teuthology.orchestra.run.vm07:> ls /dev/[sv]d? 2026-03-09T14:30:05.600 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vda 2026-03-09T14:30:05.600 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vdb 2026-03-09T14:30:05.600 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vdc 2026-03-09T14:30:05.600 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vdd 2026-03-09T14:30:05.600 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vde 2026-03-09T14:30:05.600 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-09T14:30:05.600 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-09T14:30:05.600 DEBUG:teuthology.orchestra.run.vm07:> stat /dev/vdb 2026-03-09T14:30:05.644 INFO:teuthology.orchestra.run.vm07.stdout: File: /dev/vdb 2026-03-09T14:30:05.644 INFO:teuthology.orchestra.run.vm07.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T14:30:05.644 INFO:teuthology.orchestra.run.vm07.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10 2026-03-09T14:30:05.644 INFO:teuthology.orchestra.run.vm07.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T14:30:05.644 INFO:teuthology.orchestra.run.vm07.stdout:Access: 2026-03-09 14:29:49.180603054 +0000 2026-03-09T14:30:05.644 INFO:teuthology.orchestra.run.vm07.stdout:Modify: 2026-03-09 14:29:48.380603054 +0000 2026-03-09T14:30:05.644 INFO:teuthology.orchestra.run.vm07.stdout:Change: 2026-03-09 14:29:48.380603054 +0000 2026-03-09T14:30:05.644 INFO:teuthology.orchestra.run.vm07.stdout: Birth: - 2026-03-09T14:30:05.644 DEBUG:teuthology.orchestra.run.vm07:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-09T14:30:05.691 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records in 2026-03-09T14:30:05.691 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records out 2026-03-09T14:30:05.691 INFO:teuthology.orchestra.run.vm07.stderr:512 bytes copied, 
0.000124492 s, 4.1 MB/s 2026-03-09T14:30:05.692 DEBUG:teuthology.orchestra.run.vm07:> ! mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-09T14:30:05.737 DEBUG:teuthology.orchestra.run.vm07:> stat /dev/vdc 2026-03-09T14:30:05.780 INFO:teuthology.orchestra.run.vm07.stdout: File: /dev/vdc 2026-03-09T14:30:05.780 INFO:teuthology.orchestra.run.vm07.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T14:30:05.780 INFO:teuthology.orchestra.run.vm07.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20 2026-03-09T14:30:05.780 INFO:teuthology.orchestra.run.vm07.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T14:30:05.780 INFO:teuthology.orchestra.run.vm07.stdout:Access: 2026-03-09 14:29:49.272603054 +0000 2026-03-09T14:30:05.780 INFO:teuthology.orchestra.run.vm07.stdout:Modify: 2026-03-09 14:29:48.384603054 +0000 2026-03-09T14:30:05.780 INFO:teuthology.orchestra.run.vm07.stdout:Change: 2026-03-09 14:29:48.384603054 +0000 2026-03-09T14:30:05.780 INFO:teuthology.orchestra.run.vm07.stdout: Birth: - 2026-03-09T14:30:05.780 DEBUG:teuthology.orchestra.run.vm07:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-09T14:30:05.827 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records in 2026-03-09T14:30:05.827 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records out 2026-03-09T14:30:05.827 INFO:teuthology.orchestra.run.vm07.stderr:512 bytes copied, 9.8324e-05 s, 5.2 MB/s 2026-03-09T14:30:05.827 DEBUG:teuthology.orchestra.run.vm07:> ! mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-09T14:30:05.873 DEBUG:teuthology.orchestra.run.vm07:> stat /dev/vdd 2026-03-09T14:30:05.915 INFO:teuthology.orchestra.run.vm07.stdout: File: /dev/vdd 2026-03-09T14:30:05.915 INFO:teuthology.orchestra.run.vm07.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T14:30:05.915 INFO:teuthology.orchestra.run.vm07.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30 2026-03-09T14:30:05.915 INFO:teuthology.orchestra.run.vm07.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T14:30:05.916 INFO:teuthology.orchestra.run.vm07.stdout:Access: 2026-03-09 14:29:49.364603054 +0000 2026-03-09T14:30:05.916 INFO:teuthology.orchestra.run.vm07.stdout:Modify: 2026-03-09 14:29:48.388603054 +0000 2026-03-09T14:30:05.916 INFO:teuthology.orchestra.run.vm07.stdout:Change: 2026-03-09 14:29:48.388603054 +0000 2026-03-09T14:30:05.916 INFO:teuthology.orchestra.run.vm07.stdout: Birth: - 2026-03-09T14:30:05.916 DEBUG:teuthology.orchestra.run.vm07:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-09T14:30:05.962 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records in 2026-03-09T14:30:05.962 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records out 2026-03-09T14:30:05.962 INFO:teuthology.orchestra.run.vm07.stderr:512 bytes copied, 9.3546e-05 s, 5.5 MB/s 2026-03-09T14:30:05.963 DEBUG:teuthology.orchestra.run.vm07:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-09T14:30:06.008 DEBUG:teuthology.orchestra.run.vm07:> stat /dev/vde 2026-03-09T14:30:06.052 INFO:teuthology.orchestra.run.vm07.stdout: File: /dev/vde 2026-03-09T14:30:06.052 INFO:teuthology.orchestra.run.vm07.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T14:30:06.052 INFO:teuthology.orchestra.run.vm07.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40 2026-03-09T14:30:06.052 INFO:teuthology.orchestra.run.vm07.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T14:30:06.053 INFO:teuthology.orchestra.run.vm07.stdout:Access: 2026-03-09 14:29:49.456603054 +0000 2026-03-09T14:30:06.053 INFO:teuthology.orchestra.run.vm07.stdout:Modify: 2026-03-09 14:29:48.388603054 +0000 2026-03-09T14:30:06.053 INFO:teuthology.orchestra.run.vm07.stdout:Change: 2026-03-09 14:29:48.388603054 +0000 2026-03-09T14:30:06.053 INFO:teuthology.orchestra.run.vm07.stdout: Birth: - 2026-03-09T14:30:06.053 DEBUG:teuthology.orchestra.run.vm07:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-09T14:30:06.100 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records in 2026-03-09T14:30:06.100 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records out 2026-03-09T14:30:06.100 INFO:teuthology.orchestra.run.vm07.stderr:512 bytes copied, 0.000809444 s, 633 kB/s 2026-03-09T14:30:06.101 DEBUG:teuthology.orchestra.run.vm07:> ! mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:05 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
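Before any OSDs are created, the task filters the scratch devices on each host: it lists /dev/[sv]d?, drops the root disk, and keeps only devices whose node exists, whose first sector is readable, and which are not mounted. A rough shell sketch of that probe, assuming the layout of this run (/dev/vda is the root disk, /dev/vdb through /dev/vde are candidates):

  for dev in /dev/vd[b-e]; do
      stat "$dev" > /dev/null                             # device node exists
      sudo dd if="$dev" of=/dev/null count=1 2>/dev/null  # first sector is readable
      if mount | grep -v devtmpfs | grep -q "$dev"; then  # reject anything mounted
          echo "$dev is mounted, skipping"; continue
      fi
      echo "$dev is usable as an OSD device"
  done

Devices that pass are then zapped and handed to ceph-volume, as happens with /dev/vde for osd.0 further below.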
2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: cephadm 2026-03-09T14:29:53.629307+0000 mgr.y (mgr.14152) 21 : cephadm [INF] Deploying daemon mon.b on vm11 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:29:53.745061+0000 mon.a (mon.0) 128 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: cluster 2026-03-09T14:29:53.746106+0000 mon.a (mon.0) 129 : cluster [INF] mon.a calling monitor election 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:29:53.747731+0000 mon.a (mon.0) 130 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:29:54.739659+0000 mon.a (mon.0) 131 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:29:55.043248+0000 mon.a (mon.0) 132 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:29:55.739608+0000 mon.a (mon.0) 133 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: cluster 2026-03-09T14:29:55.742898+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:29:56.043345+0000 mon.a (mon.0) 134 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:29:56.739877+0000 mon.a (mon.0) 135 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:29:57.043308+0000 mon.a (mon.0) 136 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:29:57.739848+0000 mon.a (mon.0) 137 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:29:58.043930+0000 mon.a (mon.0) 138 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 
2026-03-09T14:29:58.739843+0000 mon.a (mon.0) 139 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: cluster 2026-03-09T14:29:58.750382+0000 mon.a (mon.0) 140 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1) 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: cluster 2026-03-09T14:29:58.755298+0000 mon.a (mon.0) 141 : cluster [DBG] monmap e2: 2 mons at {a=[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0],c=[v2:192.168.123.107:3301/0,v1:192.168.123.107:6790/0]} 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: cluster 2026-03-09T14:29:58.755392+0000 mon.a (mon.0) 142 : cluster [DBG] fsmap 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: cluster 2026-03-09T14:29:58.755477+0000 mon.a (mon.0) 143 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: cluster 2026-03-09T14:29:58.755815+0000 mon.a (mon.0) 144 : cluster [DBG] mgrmap e13: y(active, since 16s) 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: cluster 2026-03-09T14:29:58.760481+0000 mon.a (mon.0) 145 : cluster [INF] overall HEALTH_OK 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:29:58.763597+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:29:58.765118+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:29:58.766005+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:29:58.766713+0000 mon.a (mon.0) 149 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: cluster 2026-03-09T14:29:59.049693+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: cluster 2026-03-09T14:29:59.051147+0000 mon.a (mon.0) 151 : cluster [INF] mon.a calling monitor election 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:29:59.053208+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:29:59.053502+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 
2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:29:59.053769+0000 mon.a (mon.0) 154 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:30:00.044008+0000 mon.a (mon.0) 155 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:30:01.044117+0000 mon.a (mon.0) 156 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:30:02.044179+0000 mon.a (mon.0) 157 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: cluster 2026-03-09T14:30:02.954486+0000 mgr.y (mgr.14152) 22 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:30:03.044211+0000 mon.a (mon.0) 158 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:30:04.044305+0000 mon.a (mon.0) 159 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: cluster 2026-03-09T14:30:04.053758+0000 mon.a (mon.0) 160 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1) 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: cluster 2026-03-09T14:30:04.056282+0000 mon.a (mon.0) 161 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0],b=[v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0],c=[v2:192.168.123.107:3301/0,v1:192.168.123.107:6790/0]} 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: cluster 2026-03-09T14:30:04.056326+0000 mon.a (mon.0) 162 : cluster [DBG] fsmap 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: cluster 2026-03-09T14:30:04.056346+0000 mon.a (mon.0) 163 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: cluster 2026-03-09T14:30:04.056445+0000 mon.a (mon.0) 164 : cluster [DBG] mgrmap e13: y(active, since 21s) 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: cluster 2026-03-09T14:30:04.059074+0000 mon.a (mon.0) 165 : cluster [INF] overall HEALTH_OK 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:30:04.062473+0000 mon.a (mon.0) 166 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:06.111 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:30:04.066024+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:30:04.074569+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: cephadm 2026-03-09T14:30:04.063080+0000 mgr.y (mgr.14152) 23 : cephadm [INF] Updating vm11:/etc/ceph/ceph.conf 2026-03-09T14:30:06.111 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: cephadm 2026-03-09T14:30:04.066411+0000 mgr.y (mgr.14152) 24 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-09T14:30:06.112 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: cephadm 2026-03-09T14:30:04.118894+0000 mgr.y (mgr.14152) 25 : cephadm [INF] Updating vm11:/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:30:06.112 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:30:04.131187+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:06.112 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:30:04.171483+0000 mon.a (mon.0) 170 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:06.112 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:30:04.174694+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:06.112 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: cephadm 2026-03-09T14:30:04.175505+0000 mgr.y (mgr.14152) 26 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 2026-03-09T14:30:06.112 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:30:04.176245+0000 mon.a (mon.0) 172 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:30:06.112 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:30:04.176790+0000 mon.a (mon.0) 173 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:30:06.112 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:30:04.177204+0000 mon.a (mon.0) 174 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:06.112 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: cephadm 2026-03-09T14:30:04.177749+0000 mgr.y (mgr.14152) 27 : cephadm [INF] Reconfiguring daemon mon.c on vm07 2026-03-09T14:30:06.112 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:30:04.346648+0000 mon.a (mon.0) 175 : audit [DBG] from='client.? 
192.168.123.111:0/3371081442' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:30:06.112 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:30:04.404818+0000 mon.a (mon.0) 176 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:06.112 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: cephadm 2026-03-09T14:30:04.405801+0000 mgr.y (mgr.14152) 28 : cephadm [INF] Reconfiguring mon.a (monmap changed)... 2026-03-09T14:30:06.112 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:30:04.406811+0000 mon.a (mon.0) 177 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:30:06.112 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:30:04.407564+0000 mon.a (mon.0) 178 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:30:06.112 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:30:04.408197+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:06.112 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: cephadm 2026-03-09T14:30:04.408889+0000 mgr.y (mgr.14152) 29 : cephadm [INF] Reconfiguring daemon mon.a on vm07 2026-03-09T14:30:06.112 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:30:04.641562+0000 mon.a (mon.0) 180 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:06.112 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:30:04.643418+0000 mon.a (mon.0) 181 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:30:06.112 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:30:04.644040+0000 mon.a (mon.0) 182 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:30:06.112 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:30:04.644618+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:06.112 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:30:04.861883+0000 mon.a (mon.0) 184 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:06.112 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:30:04.862742+0000 mon.a (mon.0) 185 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:30:06.112 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:30:04.864726+0000 mon.a (mon.0) 186 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-09T14:30:06.112 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:30:04.865312+0000 mon.a (mon.0) 187 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:30:06.112 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:30:04.898087+0000 mon.a (mon.0) 188 : audit [DBG] from='client.? 192.168.123.107:0/235206320' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:06.112 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:30:04.932357+0000 mon.a (mon.0) 189 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:06.112 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:30:04.938135+0000 mon.a (mon.0) 190 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:06.112 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:30:04.941403+0000 mon.a (mon.0) 191 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:06.112 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:30:05.044213+0000 mon.a (mon.0) 192 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:30:06.112 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:30:06 vm11 systemd[1]: Started Ceph mgr.x for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:30:06.149 DEBUG:teuthology.orchestra.run.vm11:> set -ex 2026-03-09T14:30:06.149 DEBUG:teuthology.orchestra.run.vm11:> dd if=/scratch_devs of=/dev/stdout 2026-03-09T14:30:06.152 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T14:30:06.152 DEBUG:teuthology.orchestra.run.vm11:> ls /dev/[sv]d? 
2026-03-09T14:30:06.205 INFO:teuthology.orchestra.run.vm11.stdout:/dev/vda 2026-03-09T14:30:06.205 INFO:teuthology.orchestra.run.vm11.stdout:/dev/vdb 2026-03-09T14:30:06.205 INFO:teuthology.orchestra.run.vm11.stdout:/dev/vdc 2026-03-09T14:30:06.205 INFO:teuthology.orchestra.run.vm11.stdout:/dev/vdd 2026-03-09T14:30:06.206 INFO:teuthology.orchestra.run.vm11.stdout:/dev/vde 2026-03-09T14:30:06.206 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-09T14:30:06.206 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-09T14:30:06.206 DEBUG:teuthology.orchestra.run.vm11:> stat /dev/vdb 2026-03-09T14:30:06.254 INFO:teuthology.orchestra.run.vm11.stdout: File: /dev/vdb 2026-03-09T14:30:06.254 INFO:teuthology.orchestra.run.vm11.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T14:30:06.254 INFO:teuthology.orchestra.run.vm11.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10 2026-03-09T14:30:06.254 INFO:teuthology.orchestra.run.vm11.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T14:30:06.254 INFO:teuthology.orchestra.run.vm11.stdout:Access: 2026-03-09 14:29:52.538729086 +0000 2026-03-09T14:30:06.254 INFO:teuthology.orchestra.run.vm11.stdout:Modify: 2026-03-09 14:29:51.758729086 +0000 2026-03-09T14:30:06.254 INFO:teuthology.orchestra.run.vm11.stdout:Change: 2026-03-09 14:29:51.758729086 +0000 2026-03-09T14:30:06.254 INFO:teuthology.orchestra.run.vm11.stdout: Birth: - 2026-03-09T14:30:06.254 DEBUG:teuthology.orchestra.run.vm11:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-09T14:30:06.301 INFO:teuthology.orchestra.run.vm11.stderr:1+0 records in 2026-03-09T14:30:06.301 INFO:teuthology.orchestra.run.vm11.stderr:1+0 records out 2026-03-09T14:30:06.301 INFO:teuthology.orchestra.run.vm11.stderr:512 bytes copied, 0.000165029 s, 3.1 MB/s 2026-03-09T14:30:06.302 DEBUG:teuthology.orchestra.run.vm11:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-09T14:30:06.352 DEBUG:teuthology.orchestra.run.vm11:> stat /dev/vdc 2026-03-09T14:30:06.398 INFO:teuthology.orchestra.run.vm11.stdout: File: /dev/vdc 2026-03-09T14:30:06.398 INFO:teuthology.orchestra.run.vm11.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T14:30:06.398 INFO:teuthology.orchestra.run.vm11.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20 2026-03-09T14:30:06.398 INFO:teuthology.orchestra.run.vm11.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T14:30:06.398 INFO:teuthology.orchestra.run.vm11.stdout:Access: 2026-03-09 14:29:52.634729086 +0000 2026-03-09T14:30:06.398 INFO:teuthology.orchestra.run.vm11.stdout:Modify: 2026-03-09 14:29:51.758729086 +0000 2026-03-09T14:30:06.398 INFO:teuthology.orchestra.run.vm11.stdout:Change: 2026-03-09 14:29:51.758729086 +0000 2026-03-09T14:30:06.398 INFO:teuthology.orchestra.run.vm11.stdout: Birth: - 2026-03-09T14:30:06.398 DEBUG:teuthology.orchestra.run.vm11:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-09T14:30:06.444 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:30:06 vm11 bash[18539]: debug 2026-03-09T14:30:06.238+0000 7f402c2fc700 1 -- 192.168.123.111:0/687496136 <== mon.0 v2:192.168.123.107:3300/0 4 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 194+0+0 (secure 0 0 0) 0x5639bb388340 con 0x5639bc104400 2026-03-09T14:30:06.444 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:30:06 vm11 bash[18539]: debug 2026-03-09T14:30:06.306+0000 7f4034f6b000 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T14:30:06.444 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:30:06 vm11 bash[18539]: debug 2026-03-09T14:30:06.350+0000 7f4034f6b000 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T14:30:06.446 INFO:teuthology.orchestra.run.vm11.stderr:1+0 records in 2026-03-09T14:30:06.446 INFO:teuthology.orchestra.run.vm11.stderr:1+0 records out 2026-03-09T14:30:06.446 INFO:teuthology.orchestra.run.vm11.stderr:512 bytes copied, 0.000168104 s, 3.0 MB/s 2026-03-09T14:30:06.446 DEBUG:teuthology.orchestra.run.vm11:> ! mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-09T14:30:06.496 DEBUG:teuthology.orchestra.run.vm11:> stat /dev/vdd 2026-03-09T14:30:06.542 INFO:teuthology.orchestra.run.vm11.stdout: File: /dev/vdd 2026-03-09T14:30:06.542 INFO:teuthology.orchestra.run.vm11.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T14:30:06.542 INFO:teuthology.orchestra.run.vm11.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30 2026-03-09T14:30:06.542 INFO:teuthology.orchestra.run.vm11.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T14:30:06.542 INFO:teuthology.orchestra.run.vm11.stdout:Access: 2026-03-09 14:29:52.726729086 +0000 2026-03-09T14:30:06.542 INFO:teuthology.orchestra.run.vm11.stdout:Modify: 2026-03-09 14:29:51.762729086 +0000 2026-03-09T14:30:06.542 INFO:teuthology.orchestra.run.vm11.stdout:Change: 2026-03-09 14:29:51.762729086 +0000 2026-03-09T14:30:06.542 INFO:teuthology.orchestra.run.vm11.stdout: Birth: - 2026-03-09T14:30:06.542 DEBUG:teuthology.orchestra.run.vm11:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-09T14:30:06.590 INFO:teuthology.orchestra.run.vm11.stderr:1+0 records in 2026-03-09T14:30:06.591 INFO:teuthology.orchestra.run.vm11.stderr:1+0 records out 2026-03-09T14:30:06.591 INFO:teuthology.orchestra.run.vm11.stderr:512 bytes copied, 0.000134572 s, 3.8 MB/s 2026-03-09T14:30:06.591 DEBUG:teuthology.orchestra.run.vm11:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-09T14:30:06.635 DEBUG:teuthology.orchestra.run.vm11:> stat /dev/vde 2026-03-09T14:30:06.678 INFO:teuthology.orchestra.run.vm11.stdout: File: /dev/vde 2026-03-09T14:30:06.678 INFO:teuthology.orchestra.run.vm11.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T14:30:06.678 INFO:teuthology.orchestra.run.vm11.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40 2026-03-09T14:30:06.678 INFO:teuthology.orchestra.run.vm11.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T14:30:06.678 INFO:teuthology.orchestra.run.vm11.stdout:Access: 2026-03-09 14:29:52.818729086 +0000 2026-03-09T14:30:06.678 INFO:teuthology.orchestra.run.vm11.stdout:Modify: 2026-03-09 14:29:51.758729086 +0000 2026-03-09T14:30:06.678 INFO:teuthology.orchestra.run.vm11.stdout:Change: 2026-03-09 14:29:51.758729086 +0000 2026-03-09T14:30:06.678 INFO:teuthology.orchestra.run.vm11.stdout: Birth: - 2026-03-09T14:30:06.679 DEBUG:teuthology.orchestra.run.vm11:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-09T14:30:06.727 INFO:teuthology.orchestra.run.vm11.stderr:1+0 records in 2026-03-09T14:30:06.727 INFO:teuthology.orchestra.run.vm11.stderr:1+0 records out 2026-03-09T14:30:06.727 INFO:teuthology.orchestra.run.vm11.stderr:512 bytes copied, 0.00052416 s, 977 kB/s 2026-03-09T14:30:06.728 DEBUG:teuthology.orchestra.run.vm11:> ! mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-09T14:30:06.776 INFO:tasks.cephadm:Deploying osd.0 on vm07 with /dev/vde... 2026-03-09T14:30:06.776 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- lvm zap /dev/vde 2026-03-09T14:30:06.950 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:30:06 vm11 bash[18539]: debug 2026-03-09T14:30:06.630+0000 7f4034f6b000 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T14:30:07.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:06 vm07 bash[22585]: cluster 2026-03-09T14:30:01.046156+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T14:30:07.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:06 vm07 bash[22585]: cluster 2026-03-09T14:30:06.059475+0000 mon.b (mon.2) 2 : cluster [INF] mon.b calling monitor election 2026-03-09T14:30:07.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:06 vm07 bash[22585]: cluster 2026-03-09T14:30:06.059946+0000 mon.c (mon.1) 3 : cluster [INF] mon.c calling monitor election 2026-03-09T14:30:07.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:06 vm07 bash[22585]: cluster 2026-03-09T14:30:06.059963+0000 mon.a (mon.0) 203 : cluster [INF] mon.a calling monitor election 2026-03-09T14:30:07.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:06 vm07 bash[22585]: cluster 2026-03-09T14:30:06.062503+0000 mon.a (mon.0) 204 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T14:30:07.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:06 vm07 bash[22585]: cluster 2026-03-09T14:30:06.068149+0000 mon.a (mon.0) 205 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0],b=[v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0],c=[v2:192.168.123.107:3301/0,v1:192.168.123.107:6790/0]} 2026-03-09T14:30:07.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:06 vm07 bash[22585]: cluster 2026-03-09T14:30:06.068272+0000 mon.a (mon.0) 206 : 
cluster [DBG] fsmap 2026-03-09T14:30:07.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:06 vm07 bash[22585]: cluster 2026-03-09T14:30:06.068349+0000 mon.a (mon.0) 207 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T14:30:07.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:06 vm07 bash[22585]: cluster 2026-03-09T14:30:06.068587+0000 mon.a (mon.0) 208 : cluster [DBG] mgrmap e13: y(active, since 23s) 2026-03-09T14:30:07.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:06 vm07 bash[22585]: cluster 2026-03-09T14:30:06.073660+0000 mon.a (mon.0) 209 : cluster [INF] overall HEALTH_OK 2026-03-09T14:30:07.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:06 vm07 bash[22585]: audit 2026-03-09T14:30:06.096297+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:07.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:06 vm07 bash[22585]: audit 2026-03-09T14:30:06.098010+0000 mon.a (mon.0) 211 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:30:07.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:06 vm07 bash[22585]: audit 2026-03-09T14:30:06.098608+0000 mon.a (mon.0) 212 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:07.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:06 vm07 bash[22585]: audit 2026-03-09T14:30:06.098988+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:30:07.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:06 vm07 bash[17480]: cluster 2026-03-09T14:30:01.046156+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T14:30:07.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:06 vm07 bash[17480]: cluster 2026-03-09T14:30:06.059475+0000 mon.b (mon.2) 2 : cluster [INF] mon.b calling monitor election 2026-03-09T14:30:07.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:06 vm07 bash[17480]: cluster 2026-03-09T14:30:06.059946+0000 mon.c (mon.1) 3 : cluster [INF] mon.c calling monitor election 2026-03-09T14:30:07.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:06 vm07 bash[17480]: cluster 2026-03-09T14:30:06.059963+0000 mon.a (mon.0) 203 : cluster [INF] mon.a calling monitor election 2026-03-09T14:30:07.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:06 vm07 bash[17480]: cluster 2026-03-09T14:30:06.062503+0000 mon.a (mon.0) 204 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T14:30:07.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:06 vm07 bash[17480]: cluster 2026-03-09T14:30:06.068149+0000 mon.a (mon.0) 205 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0],b=[v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0],c=[v2:192.168.123.107:3301/0,v1:192.168.123.107:6790/0]} 2026-03-09T14:30:07.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:06 vm07 bash[17480]: cluster 2026-03-09T14:30:06.068272+0000 mon.a (mon.0) 206 : cluster [DBG] fsmap 2026-03-09T14:30:07.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:06 vm07 bash[17480]: cluster 2026-03-09T14:30:06.068349+0000 mon.a (mon.0) 207 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T14:30:07.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 
14:30:06 vm07 bash[17480]: cluster 2026-03-09T14:30:06.068587+0000 mon.a (mon.0) 208 : cluster [DBG] mgrmap e13: y(active, since 23s) 2026-03-09T14:30:07.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:06 vm07 bash[17480]: cluster 2026-03-09T14:30:06.073660+0000 mon.a (mon.0) 209 : cluster [INF] overall HEALTH_OK 2026-03-09T14:30:07.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:06 vm07 bash[17480]: audit 2026-03-09T14:30:06.096297+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:07.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:06 vm07 bash[17480]: audit 2026-03-09T14:30:06.098010+0000 mon.a (mon.0) 211 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:30:07.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:06 vm07 bash[17480]: audit 2026-03-09T14:30:06.098608+0000 mon.a (mon.0) 212 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:07.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:06 vm07 bash[17480]: audit 2026-03-09T14:30:06.098988+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:30:07.215 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: cluster 2026-03-09T14:30:01.046156+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T14:30:07.215 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: cluster 2026-03-09T14:30:06.059475+0000 mon.b (mon.2) 2 : cluster [INF] mon.b calling monitor election 2026-03-09T14:30:07.215 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: cluster 2026-03-09T14:30:06.059946+0000 mon.c (mon.1) 3 : cluster [INF] mon.c calling monitor election 2026-03-09T14:30:07.215 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: cluster 2026-03-09T14:30:06.059963+0000 mon.a (mon.0) 203 : cluster [INF] mon.a calling monitor election 2026-03-09T14:30:07.215 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: cluster 2026-03-09T14:30:06.062503+0000 mon.a (mon.0) 204 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T14:30:07.215 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: cluster 2026-03-09T14:30:06.068149+0000 mon.a (mon.0) 205 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0],b=[v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0],c=[v2:192.168.123.107:3301/0,v1:192.168.123.107:6790/0]} 2026-03-09T14:30:07.215 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: cluster 2026-03-09T14:30:06.068272+0000 mon.a (mon.0) 206 : cluster [DBG] fsmap 2026-03-09T14:30:07.215 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: cluster 2026-03-09T14:30:06.068349+0000 mon.a (mon.0) 207 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T14:30:07.215 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: cluster 2026-03-09T14:30:06.068587+0000 mon.a (mon.0) 208 : cluster [DBG] mgrmap e13: y(active, since 23s) 2026-03-09T14:30:07.215 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: cluster 2026-03-09T14:30:06.073660+0000 mon.a (mon.0) 209 : 
cluster [INF] overall HEALTH_OK 2026-03-09T14:30:07.215 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:30:06.096297+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:07.215 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:30:06.098010+0000 mon.a (mon.0) 211 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:30:07.215 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:30:06.098608+0000 mon.a (mon.0) 212 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:07.215 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:06 vm11 bash[17885]: audit 2026-03-09T14:30:06.098988+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:30:07.215 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:30:07 vm11 bash[18539]: debug 2026-03-09T14:30:07.118+0000 7f4034f6b000 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T14:30:07.357 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T14:30:07.369 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph orch daemon add osd vm07:/dev/vde 2026-03-09T14:30:07.511 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:30:07 vm11 bash[18539]: debug 2026-03-09T14:30:07.206+0000 7f4034f6b000 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T14:30:07.511 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:30:07 vm11 bash[18539]: debug 2026-03-09T14:30:07.406+0000 7f4034f6b000 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T14:30:07.814 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:30:07 vm11 bash[18539]: debug 2026-03-09T14:30:07.506+0000 7f4034f6b000 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T14:30:07.814 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:30:07 vm11 bash[18539]: debug 2026-03-09T14:30:07.558+0000 7f4034f6b000 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T14:30:07.814 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:30:07 vm11 bash[18539]: debug 2026-03-09T14:30:07.690+0000 7f4034f6b000 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T14:30:07.814 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:30:07 vm11 bash[18539]: debug 2026-03-09T14:30:07.742+0000 7f4034f6b000 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T14:30:08.167 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:07 vm07 bash[17480]: cluster 2026-03-09T14:30:06.955279+0000 mgr.y (mgr.14152) 38 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:30:08.167 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:07 vm07 bash[17480]: audit 2026-03-09T14:30:07.044737+0000 mon.a (mon.0) 214 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:30:08.167 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:07 vm07 bash[17480]: audit 
2026-03-09T14:30:07.762377+0000 mon.a (mon.0) 215 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:30:08.167 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:07 vm07 bash[17480]: audit 2026-03-09T14:30:07.764029+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:30:08.167 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:07 vm07 bash[17480]: audit 2026-03-09T14:30:07.764440+0000 mon.a (mon.0) 217 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:08.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:07 vm07 bash[22585]: cluster 2026-03-09T14:30:06.955279+0000 mgr.y (mgr.14152) 38 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:30:08.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:07 vm07 bash[22585]: audit 2026-03-09T14:30:07.044737+0000 mon.a (mon.0) 214 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:30:08.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:07 vm07 bash[22585]: audit 2026-03-09T14:30:07.762377+0000 mon.a (mon.0) 215 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:30:08.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:07 vm07 bash[22585]: audit 2026-03-09T14:30:07.764029+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:30:08.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:07 vm07 bash[22585]: audit 2026-03-09T14:30:07.764440+0000 mon.a (mon.0) 217 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:08.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:07 vm11 bash[17885]: cluster 2026-03-09T14:30:06.955279+0000 mgr.y (mgr.14152) 38 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:30:08.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:07 vm11 bash[17885]: audit 2026-03-09T14:30:07.044737+0000 mon.a (mon.0) 214 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:30:08.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:07 vm11 bash[17885]: audit 2026-03-09T14:30:07.762377+0000 mon.a (mon.0) 215 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:30:08.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:07 vm11 bash[17885]: audit 2026-03-09T14:30:07.764029+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:30:08.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:07 vm11 bash[17885]: audit 2026-03-09T14:30:07.764440+0000 mon.a (mon.0) 217 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:08.261 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:30:07 vm11 bash[18539]: debug 2026-03-09T14:30:07.806+0000 7f4034f6b000 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T14:30:08.672 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:30:08 vm11 bash[18539]: debug 2026-03-09T14:30:08.274+0000 7f4034f6b000 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T14:30:08.672 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:30:08 vm11 bash[18539]: debug 2026-03-09T14:30:08.322+0000 7f4034f6b000 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T14:30:08.672 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:30:08 vm11 bash[18539]: debug 2026-03-09T14:30:08.374+0000 7f4034f6b000 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T14:30:08.923 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:30:08 vm11 bash[18539]: debug 2026-03-09T14:30:08.662+0000 7f4034f6b000 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T14:30:08.923 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:30:08 vm11 bash[18539]: debug 2026-03-09T14:30:08.722+0000 7f4034f6b000 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T14:30:08.923 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:30:08 vm11 bash[18539]: debug 2026-03-09T14:30:08.778+0000 7f4034f6b000 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T14:30:08.923 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:30:08 vm11 bash[18539]: debug 2026-03-09T14:30:08.858+0000 7f4034f6b000 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T14:30:09.261 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:30:09 vm11 bash[18539]: debug 2026-03-09T14:30:09.162+0000 7f4034f6b000 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T14:30:09.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:08 vm11 bash[17885]: audit 2026-03-09T14:30:07.761012+0000 mgr.y (mgr.14152) 39 : audit [DBG] from='client.14220 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:30:09.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:08 vm11 bash[17885]: audit 2026-03-09T14:30:08.925064+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:09.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:08 vm11 bash[17885]: audit 2026-03-09T14:30:08.929366+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:09.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:08 vm11 bash[17885]: audit 2026-03-09T14:30:08.930762+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T14:30:09.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:08 vm11 bash[17885]: audit 2026-03-09T14:30:08.931237+0000 mon.a (mon.0) 221 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T14:30:09.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:08 vm11 bash[17885]: audit 2026-03-09T14:30:08.931632+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": 
"config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:09.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:08 vm07 bash[22585]: audit 2026-03-09T14:30:07.761012+0000 mgr.y (mgr.14152) 39 : audit [DBG] from='client.14220 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:30:09.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:08 vm07 bash[22585]: audit 2026-03-09T14:30:08.925064+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:09.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:08 vm07 bash[22585]: audit 2026-03-09T14:30:08.929366+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:09.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:08 vm07 bash[22585]: audit 2026-03-09T14:30:08.930762+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T14:30:09.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:08 vm07 bash[22585]: audit 2026-03-09T14:30:08.931237+0000 mon.a (mon.0) 221 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T14:30:09.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:08 vm07 bash[22585]: audit 2026-03-09T14:30:08.931632+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:09.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:08 vm07 bash[17480]: audit 2026-03-09T14:30:07.761012+0000 mgr.y (mgr.14152) 39 : audit [DBG] from='client.14220 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:30:09.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:08 vm07 bash[17480]: audit 2026-03-09T14:30:08.925064+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:09.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:08 vm07 bash[17480]: audit 2026-03-09T14:30:08.929366+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:09.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:08 vm07 bash[17480]: audit 2026-03-09T14:30:08.930762+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T14:30:09.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:08 vm07 bash[17480]: audit 2026-03-09T14:30:08.931237+0000 mon.a (mon.0) 221 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T14:30:09.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:08 vm07 bash[17480]: audit 2026-03-09T14:30:08.931632+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:09.761 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:30:09 vm11 bash[18539]: debug 
2026-03-09T14:30:09.330+0000 7f4034f6b000 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T14:30:09.761 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:30:09 vm11 bash[18539]: debug 2026-03-09T14:30:09.382+0000 7f4034f6b000 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T14:30:09.761 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:30:09 vm11 bash[18539]: debug 2026-03-09T14:30:09.434+0000 7f4034f6b000 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T14:30:09.761 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:30:09 vm11 bash[18539]: debug 2026-03-09T14:30:09.566+0000 7f4034f6b000 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T14:30:10.387 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:10 vm07 bash[17480]: cephadm 2026-03-09T14:30:08.930470+0000 mgr.y (mgr.14152) 40 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 2026-03-09T14:30:10.387 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:10 vm07 bash[17480]: cephadm 2026-03-09T14:30:08.932178+0000 mgr.y (mgr.14152) 41 : cephadm [INF] Reconfiguring daemon mgr.y on vm07 2026-03-09T14:30:10.387 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:10 vm07 bash[17480]: cluster 2026-03-09T14:30:08.955959+0000 mgr.y (mgr.14152) 42 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:30:10.387 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:10 vm07 bash[17480]: audit 2026-03-09T14:30:09.079947+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:10.387 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:10 vm07 bash[17480]: audit 2026-03-09T14:30:09.152947+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:10.387 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:10 vm07 bash[17480]: audit 2026-03-09T14:30:09.155212+0000 mon.a (mon.0) 225 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:30:10.387 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:10 vm07 bash[17480]: audit 2026-03-09T14:30:09.156283+0000 mon.a (mon.0) 226 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:10.387 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:10 vm07 bash[17480]: audit 2026-03-09T14:30:09.157028+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:30:10.388 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:10 vm07 bash[17480]: audit 2026-03-09T14:30:09.161550+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:10.388 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:10 vm07 bash[17480]: cluster 2026-03-09T14:30:10.015984+0000 mon.a (mon.0) 229 : cluster [DBG] Standby manager daemon x started 2026-03-09T14:30:10.388 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:10 vm07 bash[17480]: audit 2026-03-09T14:30:10.017309+0000 mon.a (mon.0) 230 : audit [DBG] from='mgr.? 
192.168.123.111:0/3424195028' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T14:30:10.388 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:10 vm07 bash[17480]: audit 2026-03-09T14:30:10.017713+0000 mon.a (mon.0) 231 : audit [DBG] from='mgr.? 192.168.123.111:0/3424195028' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T14:30:10.388 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:10 vm07 bash[17480]: audit 2026-03-09T14:30:10.018480+0000 mon.a (mon.0) 232 : audit [DBG] from='mgr.? 192.168.123.111:0/3424195028' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T14:30:10.388 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:10 vm07 bash[17480]: audit 2026-03-09T14:30:10.018825+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.? 192.168.123.111:0/3424195028' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T14:30:10.388 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:10 vm07 bash[22585]: cephadm 2026-03-09T14:30:08.930470+0000 mgr.y (mgr.14152) 40 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 2026-03-09T14:30:10.388 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:10 vm07 bash[22585]: cephadm 2026-03-09T14:30:08.932178+0000 mgr.y (mgr.14152) 41 : cephadm [INF] Reconfiguring daemon mgr.y on vm07 2026-03-09T14:30:10.388 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:10 vm07 bash[22585]: cluster 2026-03-09T14:30:08.955959+0000 mgr.y (mgr.14152) 42 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:30:10.388 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:10 vm07 bash[22585]: audit 2026-03-09T14:30:09.079947+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:10.388 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:10 vm07 bash[22585]: audit 2026-03-09T14:30:09.152947+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:10.388 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:10 vm07 bash[22585]: audit 2026-03-09T14:30:09.155212+0000 mon.a (mon.0) 225 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:30:10.388 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:10 vm07 bash[22585]: audit 2026-03-09T14:30:09.156283+0000 mon.a (mon.0) 226 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:10.388 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:10 vm07 bash[22585]: audit 2026-03-09T14:30:09.157028+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:30:10.388 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:10 vm07 bash[22585]: audit 2026-03-09T14:30:09.161550+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:10.388 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:10 vm07 bash[22585]: cluster 2026-03-09T14:30:10.015984+0000 mon.a (mon.0) 229 : cluster [DBG] Standby manager daemon x started 2026-03-09T14:30:10.388 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:10 vm07 bash[22585]: audit 
2026-03-09T14:30:10.017309+0000 mon.a (mon.0) 230 : audit [DBG] from='mgr.? 192.168.123.111:0/3424195028' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T14:30:10.388 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:10 vm07 bash[22585]: audit 2026-03-09T14:30:10.017713+0000 mon.a (mon.0) 231 : audit [DBG] from='mgr.? 192.168.123.111:0/3424195028' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T14:30:10.388 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:10 vm07 bash[22585]: audit 2026-03-09T14:30:10.018480+0000 mon.a (mon.0) 232 : audit [DBG] from='mgr.? 192.168.123.111:0/3424195028' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T14:30:10.388 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:10 vm07 bash[22585]: audit 2026-03-09T14:30:10.018825+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.? 192.168.123.111:0/3424195028' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T14:30:10.511 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:30:10 vm11 bash[18539]: debug 2026-03-09T14:30:10.010+0000 7f4034f6b000 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T14:30:10.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:10 vm11 bash[17885]: cephadm 2026-03-09T14:30:08.930470+0000 mgr.y (mgr.14152) 40 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 2026-03-09T14:30:10.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:10 vm11 bash[17885]: cephadm 2026-03-09T14:30:08.932178+0000 mgr.y (mgr.14152) 41 : cephadm [INF] Reconfiguring daemon mgr.y on vm07 2026-03-09T14:30:10.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:10 vm11 bash[17885]: cluster 2026-03-09T14:30:08.955959+0000 mgr.y (mgr.14152) 42 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:30:10.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:10 vm11 bash[17885]: audit 2026-03-09T14:30:09.079947+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:10.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:10 vm11 bash[17885]: audit 2026-03-09T14:30:09.152947+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:10.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:10 vm11 bash[17885]: audit 2026-03-09T14:30:09.155212+0000 mon.a (mon.0) 225 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:30:10.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:10 vm11 bash[17885]: audit 2026-03-09T14:30:09.156283+0000 mon.a (mon.0) 226 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:10.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:10 vm11 bash[17885]: audit 2026-03-09T14:30:09.157028+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:30:10.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:10 vm11 bash[17885]: audit 2026-03-09T14:30:09.161550+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:10.511 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:10 vm11 bash[17885]: cluster 2026-03-09T14:30:10.015984+0000 mon.a (mon.0) 229 : cluster [DBG] Standby manager daemon x started 2026-03-09T14:30:10.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:10 vm11 bash[17885]: audit 2026-03-09T14:30:10.017309+0000 mon.a (mon.0) 230 : audit [DBG] from='mgr.? 192.168.123.111:0/3424195028' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T14:30:10.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:10 vm11 bash[17885]: audit 2026-03-09T14:30:10.017713+0000 mon.a (mon.0) 231 : audit [DBG] from='mgr.? 192.168.123.111:0/3424195028' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T14:30:10.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:10 vm11 bash[17885]: audit 2026-03-09T14:30:10.018480+0000 mon.a (mon.0) 232 : audit [DBG] from='mgr.? 192.168.123.111:0/3424195028' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T14:30:10.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:10 vm11 bash[17885]: audit 2026-03-09T14:30:10.018825+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.? 192.168.123.111:0/3424195028' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T14:30:11.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:11 vm07 bash[22585]: audit 2026-03-09T14:30:10.933218+0000 mon.c (mon.1) 4 : audit [INF] from='client.? 192.168.123.107:0/2177485672' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "01f1c7a2-0d56-449a-98b5-2d0134c34758"}]: dispatch 2026-03-09T14:30:11.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:11 vm07 bash[22585]: audit 2026-03-09T14:30:10.933536+0000 mon.a (mon.0) 234 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "01f1c7a2-0d56-449a-98b5-2d0134c34758"}]: dispatch 2026-03-09T14:30:11.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:11 vm07 bash[22585]: audit 2026-03-09T14:30:10.938276+0000 mon.a (mon.0) 235 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "01f1c7a2-0d56-449a-98b5-2d0134c34758"}]': finished 2026-03-09T14:30:11.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:11 vm07 bash[22585]: cluster 2026-03-09T14:30:10.938330+0000 mon.a (mon.0) 236 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-09T14:30:11.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:11 vm07 bash[22585]: audit 2026-03-09T14:30:10.938427+0000 mon.a (mon.0) 237 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:30:11.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:11 vm07 bash[22585]: cluster 2026-03-09T14:30:10.967374+0000 mon.a (mon.0) 238 : cluster [DBG] mgrmap e14: y(active, since 28s), standbys: x 2026-03-09T14:30:11.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:11 vm07 bash[22585]: audit 2026-03-09T14:30:10.967448+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T14:30:11.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:11 vm07 bash[22585]: audit 2026-03-09T14:30:11.551010+0000 mon.b (mon.2) 3 : audit [DBG] from='client.? 
192.168.123.107:0/1008291073' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:30:11.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:11 vm07 bash[17480]: audit 2026-03-09T14:30:10.933218+0000 mon.c (mon.1) 4 : audit [INF] from='client.? 192.168.123.107:0/2177485672' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "01f1c7a2-0d56-449a-98b5-2d0134c34758"}]: dispatch 2026-03-09T14:30:11.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:11 vm07 bash[17480]: audit 2026-03-09T14:30:10.933536+0000 mon.a (mon.0) 234 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "01f1c7a2-0d56-449a-98b5-2d0134c34758"}]: dispatch 2026-03-09T14:30:11.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:11 vm07 bash[17480]: audit 2026-03-09T14:30:10.938276+0000 mon.a (mon.0) 235 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "01f1c7a2-0d56-449a-98b5-2d0134c34758"}]': finished 2026-03-09T14:30:11.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:11 vm07 bash[17480]: cluster 2026-03-09T14:30:10.938330+0000 mon.a (mon.0) 236 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-09T14:30:11.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:11 vm07 bash[17480]: audit 2026-03-09T14:30:10.938427+0000 mon.a (mon.0) 237 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:30:11.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:11 vm07 bash[17480]: cluster 2026-03-09T14:30:10.967374+0000 mon.a (mon.0) 238 : cluster [DBG] mgrmap e14: y(active, since 28s), standbys: x 2026-03-09T14:30:11.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:11 vm07 bash[17480]: audit 2026-03-09T14:30:10.967448+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T14:30:11.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:11 vm07 bash[17480]: audit 2026-03-09T14:30:11.551010+0000 mon.b (mon.2) 3 : audit [DBG] from='client.? 192.168.123.107:0/1008291073' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:30:12.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:11 vm11 bash[17885]: audit 2026-03-09T14:30:10.933218+0000 mon.c (mon.1) 4 : audit [INF] from='client.? 192.168.123.107:0/2177485672' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "01f1c7a2-0d56-449a-98b5-2d0134c34758"}]: dispatch 2026-03-09T14:30:12.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:11 vm11 bash[17885]: audit 2026-03-09T14:30:10.933536+0000 mon.a (mon.0) 234 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "01f1c7a2-0d56-449a-98b5-2d0134c34758"}]: dispatch 2026-03-09T14:30:12.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:11 vm11 bash[17885]: audit 2026-03-09T14:30:10.938276+0000 mon.a (mon.0) 235 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "01f1c7a2-0d56-449a-98b5-2d0134c34758"}]': finished 2026-03-09T14:30:12.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:11 vm11 bash[17885]: cluster 2026-03-09T14:30:10.938330+0000 mon.a (mon.0) 236 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-09T14:30:12.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:11 vm11 bash[17885]: audit 2026-03-09T14:30:10.938427+0000 mon.a (mon.0) 237 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:30:12.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:11 vm11 bash[17885]: cluster 2026-03-09T14:30:10.967374+0000 mon.a (mon.0) 238 : cluster [DBG] mgrmap e14: y(active, since 28s), standbys: x 2026-03-09T14:30:12.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:11 vm11 bash[17885]: audit 2026-03-09T14:30:10.967448+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T14:30:12.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:11 vm11 bash[17885]: audit 2026-03-09T14:30:11.551010+0000 mon.b (mon.2) 3 : audit [DBG] from='client.? 192.168.123.107:0/1008291073' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:30:12.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:12 vm07 bash[22585]: cluster 2026-03-09T14:30:10.956300+0000 mgr.y (mgr.14152) 43 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:30:12.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:12 vm07 bash[17480]: cluster 2026-03-09T14:30:10.956300+0000 mgr.y (mgr.14152) 43 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:30:13.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:12 vm11 bash[17885]: cluster 2026-03-09T14:30:10.956300+0000 mgr.y (mgr.14152) 43 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:30:14.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:14 vm07 bash[17480]: cluster 2026-03-09T14:30:12.956504+0000 mgr.y (mgr.14152) 44 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:30:14.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:14 vm07 bash[22585]: cluster 2026-03-09T14:30:12.956504+0000 mgr.y (mgr.14152) 44 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:30:15.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:14 vm11 bash[17885]: cluster 2026-03-09T14:30:12.956504+0000 mgr.y (mgr.14152) 44 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:30:16.825 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:16 vm07 bash[17480]: cluster 2026-03-09T14:30:14.956711+0000 mgr.y (mgr.14152) 45 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:30:16.825 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:16 vm07 bash[22585]: cluster 2026-03-09T14:30:14.956711+0000 mgr.y (mgr.14152) 45 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:30:17.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:16 vm11 bash[17885]: cluster 2026-03-09T14:30:14.956711+0000 mgr.y (mgr.14152) 45 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:30:17.647 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:17 vm07 
systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:30:17.647 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:17 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:30:17.647 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:17 vm07 bash[17480]: audit 2026-03-09T14:30:16.859506+0000 mon.a (mon.0) 240 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T14:30:17.647 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:17 vm07 bash[17480]: audit 2026-03-09T14:30:16.860143+0000 mon.a (mon.0) 241 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:17.647 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:30:17 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:30:17.647 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:30:17 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:30:17.647 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:17 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:30:17.647 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:17 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:30:17.647 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:17 vm07 bash[22585]: audit 2026-03-09T14:30:16.859506+0000 mon.a (mon.0) 240 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T14:30:17.647 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:17 vm07 bash[22585]: audit 2026-03-09T14:30:16.860143+0000 mon.a (mon.0) 241 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:18.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:17 vm11 bash[17885]: audit 2026-03-09T14:30:16.859506+0000 mon.a (mon.0) 240 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T14:30:18.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:17 vm11 bash[17885]: audit 2026-03-09T14:30:16.860143+0000 mon.a (mon.0) 241 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:18.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:18 vm07 bash[17480]: cephadm 2026-03-09T14:30:16.860552+0000 mgr.y (mgr.14152) 46 : cephadm [INF] Deploying daemon osd.0 on vm07 2026-03-09T14:30:18.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:18 vm07 bash[17480]: cluster 2026-03-09T14:30:16.956940+0000 mgr.y (mgr.14152) 47 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:30:18.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:18 vm07 bash[17480]: audit 2026-03-09T14:30:17.666721+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:18.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:18 vm07 bash[17480]: audit 2026-03-09T14:30:17.690835+0000 mon.a (mon.0) 243 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:30:18.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:18 vm07 bash[17480]: audit 2026-03-09T14:30:17.693318+0000 mon.a (mon.0) 244 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:18.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:18 vm07 bash[17480]: audit 2026-03-09T14:30:17.694638+0000 mon.a (mon.0) 245 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:30:18.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:18 vm07 bash[22585]: cephadm 2026-03-09T14:30:16.860552+0000 mgr.y (mgr.14152) 46 : cephadm [INF] Deploying daemon osd.0 on vm07 2026-03-09T14:30:18.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:18 vm07 bash[22585]: cluster 2026-03-09T14:30:16.956940+0000 mgr.y (mgr.14152) 47 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:30:18.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:18 vm07 bash[22585]: audit 2026-03-09T14:30:17.666721+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:18.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:18 vm07 bash[22585]: audit 2026-03-09T14:30:17.690835+0000 mon.a (mon.0) 243 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' 
entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:30:18.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:18 vm07 bash[22585]: audit 2026-03-09T14:30:17.693318+0000 mon.a (mon.0) 244 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:18.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:18 vm07 bash[22585]: audit 2026-03-09T14:30:17.694638+0000 mon.a (mon.0) 245 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:30:19.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:18 vm11 bash[17885]: cephadm 2026-03-09T14:30:16.860552+0000 mgr.y (mgr.14152) 46 : cephadm [INF] Deploying daemon osd.0 on vm07 2026-03-09T14:30:19.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:18 vm11 bash[17885]: cluster 2026-03-09T14:30:16.956940+0000 mgr.y (mgr.14152) 47 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:30:19.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:18 vm11 bash[17885]: audit 2026-03-09T14:30:17.666721+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:19.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:18 vm11 bash[17885]: audit 2026-03-09T14:30:17.690835+0000 mon.a (mon.0) 243 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:30:19.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:18 vm11 bash[17885]: audit 2026-03-09T14:30:17.693318+0000 mon.a (mon.0) 244 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:19.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:18 vm11 bash[17885]: audit 2026-03-09T14:30:17.694638+0000 mon.a (mon.0) 245 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:30:20.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:20 vm07 bash[22585]: cluster 2026-03-09T14:30:18.957166+0000 mgr.y (mgr.14152) 48 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:30:20.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:20 vm07 bash[22585]: audit 2026-03-09T14:30:20.569135+0000 mon.a (mon.0) 246 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:20.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:20 vm07 bash[22585]: audit 2026-03-09T14:30:20.572757+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:20.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:20 vm07 bash[17480]: cluster 2026-03-09T14:30:18.957166+0000 mgr.y (mgr.14152) 48 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:30:20.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:20 vm07 bash[17480]: audit 2026-03-09T14:30:20.569135+0000 mon.a (mon.0) 246 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:20.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:20 vm07 bash[17480]: audit 2026-03-09T14:30:20.572757+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' 
entity='mgr.y' 2026-03-09T14:30:20.984 INFO:teuthology.orchestra.run.vm07.stdout:Created osd(s) 0 on host 'vm07' 2026-03-09T14:30:21.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:20 vm11 bash[17885]: cluster 2026-03-09T14:30:18.957166+0000 mgr.y (mgr.14152) 48 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:30:21.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:20 vm11 bash[17885]: audit 2026-03-09T14:30:20.569135+0000 mon.a (mon.0) 246 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:21.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:20 vm11 bash[17885]: audit 2026-03-09T14:30:20.572757+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:21.061 DEBUG:teuthology.orchestra.run.vm07:osd.0> sudo journalctl -f -n 0 -u ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@osd.0.service 2026-03-09T14:30:21.062 INFO:tasks.cephadm:Deploying osd.1 on vm07 with /dev/vdd... 2026-03-09T14:30:21.062 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- lvm zap /dev/vdd 2026-03-09T14:30:21.661 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T14:30:21.672 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph orch daemon add osd vm07:/dev/vdd 2026-03-09T14:30:21.881 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:21 vm07 bash[22585]: audit 2026-03-09T14:30:20.800496+0000 mon.c (mon.1) 5 : audit [INF] from='osd.0 [v2:192.168.123.107:6802/3608472040,v1:192.168.123.107:6803/3608472040]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T14:30:21.881 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:21 vm07 bash[22585]: audit 2026-03-09T14:30:20.800755+0000 mon.a (mon.0) 248 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T14:30:21.881 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:21 vm07 bash[22585]: audit 2026-03-09T14:30:20.977242+0000 mon.a (mon.0) 249 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:21.881 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:21 vm07 bash[22585]: audit 2026-03-09T14:30:20.989238+0000 mon.a (mon.0) 250 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:30:21.881 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:21 vm07 bash[22585]: audit 2026-03-09T14:30:20.990555+0000 mon.a (mon.0) 251 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:21.881 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:21 vm07 bash[22585]: audit 2026-03-09T14:30:20.991239+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:30:21.881 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:21 vm07 bash[17480]: audit 2026-03-09T14:30:20.800496+0000 mon.c (mon.1) 5 : audit [INF] 
from='osd.0 [v2:192.168.123.107:6802/3608472040,v1:192.168.123.107:6803/3608472040]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T14:30:21.881 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:21 vm07 bash[17480]: audit 2026-03-09T14:30:20.800755+0000 mon.a (mon.0) 248 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T14:30:21.881 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:21 vm07 bash[17480]: audit 2026-03-09T14:30:20.977242+0000 mon.a (mon.0) 249 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:21.881 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:21 vm07 bash[17480]: audit 2026-03-09T14:30:20.989238+0000 mon.a (mon.0) 250 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:30:21.881 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:21 vm07 bash[17480]: audit 2026-03-09T14:30:20.990555+0000 mon.a (mon.0) 251 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:21.882 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:21 vm07 bash[17480]: audit 2026-03-09T14:30:20.991239+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:30:22.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:21 vm11 bash[17885]: audit 2026-03-09T14:30:20.800496+0000 mon.c (mon.1) 5 : audit [INF] from='osd.0 [v2:192.168.123.107:6802/3608472040,v1:192.168.123.107:6803/3608472040]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T14:30:22.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:21 vm11 bash[17885]: audit 2026-03-09T14:30:20.800755+0000 mon.a (mon.0) 248 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T14:30:22.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:21 vm11 bash[17885]: audit 2026-03-09T14:30:20.977242+0000 mon.a (mon.0) 249 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:22.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:21 vm11 bash[17885]: audit 2026-03-09T14:30:20.989238+0000 mon.a (mon.0) 250 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:30:22.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:21 vm11 bash[17885]: audit 2026-03-09T14:30:20.990555+0000 mon.a (mon.0) 251 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:22.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:21 vm11 bash[17885]: audit 2026-03-09T14:30:20.991239+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:30:22.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:22 vm07 bash[22585]: cluster 2026-03-09T14:30:20.961847+0000 mgr.y (mgr.14152) 49 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:30:22.917 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:22 vm07 bash[22585]: audit 2026-03-09T14:30:21.608292+0000 mon.a (mon.0) 253 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T14:30:22.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:22 vm07 bash[22585]: cluster 2026-03-09T14:30:21.608490+0000 mon.a (mon.0) 254 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-09T14:30:22.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:22 vm07 bash[22585]: audit 2026-03-09T14:30:21.608894+0000 mon.c (mon.1) 6 : audit [INF] from='osd.0 [v2:192.168.123.107:6802/3608472040,v1:192.168.123.107:6803/3608472040]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:30:22.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:22 vm07 bash[22585]: audit 2026-03-09T14:30:21.609862+0000 mon.a (mon.0) 255 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:30:22.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:22 vm07 bash[22585]: audit 2026-03-09T14:30:21.610167+0000 mon.a (mon.0) 256 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:30:22.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:22 vm07 bash[22585]: audit 2026-03-09T14:30:22.049194+0000 mon.a (mon.0) 257 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:30:22.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:22 vm07 bash[22585]: audit 2026-03-09T14:30:22.050526+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:30:22.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:22 vm07 bash[22585]: audit 2026-03-09T14:30:22.051142+0000 mon.a (mon.0) 259 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:22.918 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:30:22 vm07 bash[25297]: debug 2026-03-09T14:30:22.616+0000 7f52f513a700 -1 osd.0 0 waiting for initial osdmap 2026-03-09T14:30:22.918 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:30:22 vm07 bash[25297]: debug 2026-03-09T14:30:22.620+0000 7f52eeacf700 -1 osd.0 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T14:30:22.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:22 vm07 bash[17480]: cluster 2026-03-09T14:30:20.961847+0000 mgr.y (mgr.14152) 49 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:30:22.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:22 vm07 bash[17480]: audit 2026-03-09T14:30:21.608292+0000 mon.a (mon.0) 253 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T14:30:22.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:22 vm07 bash[17480]: cluster 2026-03-09T14:30:21.608490+0000 mon.a (mon.0) 254 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-09T14:30:22.918 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:22 vm07 bash[17480]: audit 2026-03-09T14:30:21.608894+0000 mon.c (mon.1) 6 : audit [INF] from='osd.0 [v2:192.168.123.107:6802/3608472040,v1:192.168.123.107:6803/3608472040]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:30:22.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:22 vm07 bash[17480]: audit 2026-03-09T14:30:21.609862+0000 mon.a (mon.0) 255 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:30:22.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:22 vm07 bash[17480]: audit 2026-03-09T14:30:21.610167+0000 mon.a (mon.0) 256 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:30:22.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:22 vm07 bash[17480]: audit 2026-03-09T14:30:22.049194+0000 mon.a (mon.0) 257 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:30:22.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:22 vm07 bash[17480]: audit 2026-03-09T14:30:22.050526+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:30:22.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:22 vm07 bash[17480]: audit 2026-03-09T14:30:22.051142+0000 mon.a (mon.0) 259 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:23.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:22 vm11 bash[17885]: cluster 2026-03-09T14:30:20.961847+0000 mgr.y (mgr.14152) 49 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:30:23.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:22 vm11 bash[17885]: audit 2026-03-09T14:30:21.608292+0000 mon.a (mon.0) 253 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T14:30:23.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:22 vm11 bash[17885]: cluster 2026-03-09T14:30:21.608490+0000 mon.a (mon.0) 254 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-09T14:30:23.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:22 vm11 bash[17885]: audit 2026-03-09T14:30:21.608894+0000 mon.c (mon.1) 6 : audit [INF] from='osd.0 [v2:192.168.123.107:6802/3608472040,v1:192.168.123.107:6803/3608472040]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:30:23.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:22 vm11 bash[17885]: audit 2026-03-09T14:30:21.609862+0000 mon.a (mon.0) 255 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:30:23.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:22 vm11 bash[17885]: audit 2026-03-09T14:30:21.610167+0000 mon.a (mon.0) 256 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 
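The OSD deployments interleaved with the journal noise above follow a fixed per-device pattern driven by tasks.cephadm: zap the device with ceph-volume, ask the orchestrator to create an OSD on it, then follow the new daemon's journal until it reports in. A condensed sketch of that sequence, reusing the image, fsid, host and device values visible in this run (treat them as examples for this job only):

    # zap the target device so ceph-volume treats it as clean
    sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 ceph-volume \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- lvm zap /dev/vdd
    # ask the cephadm orchestrator to create an OSD on the zapped device
    sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- \
        ceph orch daemon add osd vm07:/dev/vdd
    # once "Created osd(s) N on host ..." is reported, follow the new unit's journal
    sudo journalctl -f -n 0 -u ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@osd.1.service

Each of these commands appears verbatim in the DEBUG:teuthology.orchestra.run lines above and below; the audit/cephadm entries in the monitor journals are the cluster-side trace of the same operation.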
2026-03-09T14:30:23.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:22 vm11 bash[17885]: audit 2026-03-09T14:30:22.049194+0000 mon.a (mon.0) 257 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:30:23.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:22 vm11 bash[17885]: audit 2026-03-09T14:30:22.050526+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:30:23.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:22 vm11 bash[17885]: audit 2026-03-09T14:30:22.051142+0000 mon.a (mon.0) 259 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:23.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:23 vm07 bash[22585]: audit 2026-03-09T14:30:22.047933+0000 mgr.y (mgr.14152) 50 : audit [DBG] from='client.24121 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:30:23.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:23 vm07 bash[22585]: audit 2026-03-09T14:30:22.611026+0000 mon.a (mon.0) 260 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-09T14:30:23.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:23 vm07 bash[22585]: cluster 2026-03-09T14:30:22.611063+0000 mon.a (mon.0) 261 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-09T14:30:23.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:23 vm07 bash[22585]: audit 2026-03-09T14:30:22.612175+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:30:23.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:23 vm07 bash[22585]: audit 2026-03-09T14:30:22.614752+0000 mon.a (mon.0) 263 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:30:23.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:23 vm07 bash[22585]: cluster 2026-03-09T14:30:22.962099+0000 mgr.y (mgr.14152) 51 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:30:23.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:23 vm07 bash[17480]: audit 2026-03-09T14:30:22.047933+0000 mgr.y (mgr.14152) 50 : audit [DBG] from='client.24121 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:30:23.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:23 vm07 bash[17480]: audit 2026-03-09T14:30:22.611026+0000 mon.a (mon.0) 260 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-09T14:30:23.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:23 vm07 bash[17480]: cluster 2026-03-09T14:30:22.611063+0000 mon.a (mon.0) 261 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-09T14:30:23.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:23 vm07 bash[17480]: audit 2026-03-09T14:30:22.612175+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14152 
192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:30:23.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:23 vm07 bash[17480]: audit 2026-03-09T14:30:22.614752+0000 mon.a (mon.0) 263 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:30:23.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:23 vm07 bash[17480]: cluster 2026-03-09T14:30:22.962099+0000 mgr.y (mgr.14152) 51 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:30:24.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:23 vm11 bash[17885]: audit 2026-03-09T14:30:22.047933+0000 mgr.y (mgr.14152) 50 : audit [DBG] from='client.24121 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:30:24.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:23 vm11 bash[17885]: audit 2026-03-09T14:30:22.611026+0000 mon.a (mon.0) 260 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-09T14:30:24.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:23 vm11 bash[17885]: cluster 2026-03-09T14:30:22.611063+0000 mon.a (mon.0) 261 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-09T14:30:24.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:23 vm11 bash[17885]: audit 2026-03-09T14:30:22.612175+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:30:24.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:23 vm11 bash[17885]: audit 2026-03-09T14:30:22.614752+0000 mon.a (mon.0) 263 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:30:24.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:23 vm11 bash[17885]: cluster 2026-03-09T14:30:22.962099+0000 mgr.y (mgr.14152) 51 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T14:30:24.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:24 vm07 bash[22585]: cluster 2026-03-09T14:30:21.765772+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T14:30:24.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:24 vm07 bash[22585]: cluster 2026-03-09T14:30:21.765858+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T14:30:24.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:24 vm07 bash[22585]: audit 2026-03-09T14:30:23.616137+0000 mon.a (mon.0) 264 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:30:24.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:24 vm07 bash[22585]: cluster 2026-03-09T14:30:23.633825+0000 mon.a (mon.0) 265 : cluster [INF] osd.0 [v2:192.168.123.107:6802/3608472040,v1:192.168.123.107:6803/3608472040] boot 2026-03-09T14:30:24.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:24 vm07 bash[22585]: cluster 2026-03-09T14:30:23.633858+0000 mon.a (mon.0) 266 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-09T14:30:24.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:24 vm07 bash[22585]: audit 2026-03-09T14:30:23.633918+0000 mon.a (mon.0) 267 : audit [DBG] from='mgr.14152 
192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:30:24.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:24 vm07 bash[17480]: cluster 2026-03-09T14:30:21.765772+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T14:30:24.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:24 vm07 bash[17480]: cluster 2026-03-09T14:30:21.765858+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T14:30:24.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:24 vm07 bash[17480]: audit 2026-03-09T14:30:23.616137+0000 mon.a (mon.0) 264 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:30:24.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:24 vm07 bash[17480]: cluster 2026-03-09T14:30:23.633825+0000 mon.a (mon.0) 265 : cluster [INF] osd.0 [v2:192.168.123.107:6802/3608472040,v1:192.168.123.107:6803/3608472040] boot 2026-03-09T14:30:24.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:24 vm07 bash[17480]: cluster 2026-03-09T14:30:23.633858+0000 mon.a (mon.0) 266 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-09T14:30:24.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:24 vm07 bash[17480]: audit 2026-03-09T14:30:23.633918+0000 mon.a (mon.0) 267 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:30:25.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:24 vm11 bash[17885]: cluster 2026-03-09T14:30:21.765772+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T14:30:25.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:24 vm11 bash[17885]: cluster 2026-03-09T14:30:21.765858+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T14:30:25.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:24 vm11 bash[17885]: audit 2026-03-09T14:30:23.616137+0000 mon.a (mon.0) 264 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:30:25.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:24 vm11 bash[17885]: cluster 2026-03-09T14:30:23.633825+0000 mon.a (mon.0) 265 : cluster [INF] osd.0 [v2:192.168.123.107:6802/3608472040,v1:192.168.123.107:6803/3608472040] boot 2026-03-09T14:30:25.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:24 vm11 bash[17885]: cluster 2026-03-09T14:30:23.633858+0000 mon.a (mon.0) 266 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-09T14:30:25.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:24 vm11 bash[17885]: audit 2026-03-09T14:30:23.633918+0000 mon.a (mon.0) 267 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:30:25.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:25 vm07 bash[22585]: cluster 2026-03-09T14:30:24.631330+0000 mon.a (mon.0) 268 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in 2026-03-09T14:30:25.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:25 vm07 bash[22585]: cluster 2026-03-09T14:30:24.962334+0000 mgr.y (mgr.14152) 52 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T14:30:25.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:25 vm07 bash[22585]: cephadm 2026-03-09T14:30:25.181188+0000 mgr.y (mgr.14152) 53 : cephadm [INF] Detected new or changed devices on 
vm07 2026-03-09T14:30:25.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:25 vm07 bash[22585]: audit 2026-03-09T14:30:25.187231+0000 mon.a (mon.0) 269 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:25.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:25 vm07 bash[22585]: audit 2026-03-09T14:30:25.188591+0000 mon.a (mon.0) 270 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:30:25.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:25 vm07 bash[22585]: audit 2026-03-09T14:30:25.192147+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:25.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:25 vm07 bash[17480]: cluster 2026-03-09T14:30:24.631330+0000 mon.a (mon.0) 268 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in 2026-03-09T14:30:25.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:25 vm07 bash[17480]: cluster 2026-03-09T14:30:24.962334+0000 mgr.y (mgr.14152) 52 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T14:30:25.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:25 vm07 bash[17480]: cephadm 2026-03-09T14:30:25.181188+0000 mgr.y (mgr.14152) 53 : cephadm [INF] Detected new or changed devices on vm07 2026-03-09T14:30:25.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:25 vm07 bash[17480]: audit 2026-03-09T14:30:25.187231+0000 mon.a (mon.0) 269 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:25.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:25 vm07 bash[17480]: audit 2026-03-09T14:30:25.188591+0000 mon.a (mon.0) 270 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:30:25.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:25 vm07 bash[17480]: audit 2026-03-09T14:30:25.192147+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:26.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:25 vm11 bash[17885]: cluster 2026-03-09T14:30:24.631330+0000 mon.a (mon.0) 268 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in 2026-03-09T14:30:26.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:25 vm11 bash[17885]: cluster 2026-03-09T14:30:24.962334+0000 mgr.y (mgr.14152) 52 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T14:30:26.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:25 vm11 bash[17885]: cephadm 2026-03-09T14:30:25.181188+0000 mgr.y (mgr.14152) 53 : cephadm [INF] Detected new or changed devices on vm07 2026-03-09T14:30:26.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:25 vm11 bash[17885]: audit 2026-03-09T14:30:25.187231+0000 mon.a (mon.0) 269 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:26.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:25 vm11 bash[17885]: audit 2026-03-09T14:30:25.188591+0000 mon.a (mon.0) 270 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:30:26.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:25 vm11 bash[17885]: audit 2026-03-09T14:30:25.192147+0000 
mon.a (mon.0) 271 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:26.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:26 vm07 bash[22585]: audit 2026-03-09T14:30:26.148721+0000 mon.c (mon.1) 7 : audit [INF] from='client.? 192.168.123.107:0/4189497847' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c5bcdd68-0c8f-46dc-8a25-561605efa0ff"}]: dispatch 2026-03-09T14:30:26.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:26 vm07 bash[22585]: audit 2026-03-09T14:30:26.149069+0000 mon.a (mon.0) 272 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c5bcdd68-0c8f-46dc-8a25-561605efa0ff"}]: dispatch 2026-03-09T14:30:26.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:26 vm07 bash[22585]: audit 2026-03-09T14:30:26.153892+0000 mon.a (mon.0) 273 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c5bcdd68-0c8f-46dc-8a25-561605efa0ff"}]': finished 2026-03-09T14:30:26.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:26 vm07 bash[22585]: cluster 2026-03-09T14:30:26.153946+0000 mon.a (mon.0) 274 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-09T14:30:26.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:26 vm07 bash[22585]: audit 2026-03-09T14:30:26.153994+0000 mon.a (mon.0) 275 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:30:26.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:26 vm07 bash[17480]: audit 2026-03-09T14:30:26.148721+0000 mon.c (mon.1) 7 : audit [INF] from='client.? 192.168.123.107:0/4189497847' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c5bcdd68-0c8f-46dc-8a25-561605efa0ff"}]: dispatch 2026-03-09T14:30:26.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:26 vm07 bash[17480]: audit 2026-03-09T14:30:26.149069+0000 mon.a (mon.0) 272 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c5bcdd68-0c8f-46dc-8a25-561605efa0ff"}]: dispatch 2026-03-09T14:30:26.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:26 vm07 bash[17480]: audit 2026-03-09T14:30:26.153892+0000 mon.a (mon.0) 273 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c5bcdd68-0c8f-46dc-8a25-561605efa0ff"}]': finished 2026-03-09T14:30:26.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:26 vm07 bash[17480]: cluster 2026-03-09T14:30:26.153946+0000 mon.a (mon.0) 274 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-09T14:30:26.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:26 vm07 bash[17480]: audit 2026-03-09T14:30:26.153994+0000 mon.a (mon.0) 275 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:30:27.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:26 vm11 bash[17885]: audit 2026-03-09T14:30:26.148721+0000 mon.c (mon.1) 7 : audit [INF] from='client.? 192.168.123.107:0/4189497847' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c5bcdd68-0c8f-46dc-8a25-561605efa0ff"}]: dispatch 2026-03-09T14:30:27.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:26 vm11 bash[17885]: audit 2026-03-09T14:30:26.149069+0000 mon.a (mon.0) 272 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c5bcdd68-0c8f-46dc-8a25-561605efa0ff"}]: dispatch 2026-03-09T14:30:27.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:26 vm11 bash[17885]: audit 2026-03-09T14:30:26.153892+0000 mon.a (mon.0) 273 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c5bcdd68-0c8f-46dc-8a25-561605efa0ff"}]': finished 2026-03-09T14:30:27.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:26 vm11 bash[17885]: cluster 2026-03-09T14:30:26.153946+0000 mon.a (mon.0) 274 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-09T14:30:27.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:26 vm11 bash[17885]: audit 2026-03-09T14:30:26.153994+0000 mon.a (mon.0) 275 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:30:27.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:27 vm07 bash[22585]: audit 2026-03-09T14:30:26.771243+0000 mon.c (mon.1) 8 : audit [DBG] from='client.? 192.168.123.107:0/355801239' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:30:27.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:27 vm07 bash[22585]: cluster 2026-03-09T14:30:26.962582+0000 mgr.y (mgr.14152) 54 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T14:30:27.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:27 vm07 bash[17480]: audit 2026-03-09T14:30:26.771243+0000 mon.c (mon.1) 8 : audit [DBG] from='client.? 192.168.123.107:0/355801239' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:30:27.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:27 vm07 bash[17480]: cluster 2026-03-09T14:30:26.962582+0000 mgr.y (mgr.14152) 54 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T14:30:28.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:27 vm11 bash[17885]: audit 2026-03-09T14:30:26.771243+0000 mon.c (mon.1) 8 : audit [DBG] from='client.? 
192.168.123.107:0/355801239' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:30:28.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:27 vm11 bash[17885]: cluster 2026-03-09T14:30:26.962582+0000 mgr.y (mgr.14152) 54 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T14:30:30.417 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:30 vm07 bash[17480]: cluster 2026-03-09T14:30:28.962968+0000 mgr.y (mgr.14152) 55 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T14:30:30.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:30 vm07 bash[22585]: cluster 2026-03-09T14:30:28.962968+0000 mgr.y (mgr.14152) 55 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T14:30:30.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:30 vm11 bash[17885]: cluster 2026-03-09T14:30:28.962968+0000 mgr.y (mgr.14152) 55 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T14:30:32.667 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:32 vm07 bash[22585]: cluster 2026-03-09T14:30:30.963250+0000 mgr.y (mgr.14152) 56 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T14:30:32.667 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:32 vm07 bash[22585]: audit 2026-03-09T14:30:32.169642+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T14:30:32.667 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:32 vm07 bash[22585]: audit 2026-03-09T14:30:32.170115+0000 mon.a (mon.0) 277 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:32.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:32 vm07 bash[17480]: cluster 2026-03-09T14:30:30.963250+0000 mgr.y (mgr.14152) 56 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T14:30:32.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:32 vm07 bash[17480]: audit 2026-03-09T14:30:32.169642+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T14:30:32.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:32 vm07 bash[17480]: audit 2026-03-09T14:30:32.170115+0000 mon.a (mon.0) 277 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:32.954 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:32 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:30:32.954 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:32 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:30:32.954 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:30:32 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:30:32.954 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:30:32 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:30:32.954 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:32 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:30:32.954 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:32 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:30:32.954 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:30:32 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:30:32.954 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:30:32 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
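The block of identical systemd warnings above comes from the unit template that cephadm writes for every containerized daemon in this cluster (ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service), which still ships KillMode=none; each journalctl follower captures the same message, hence the repetition. The warning is harmless for this test, but systemd's suggested remedy is to switch to a safer KillMode. A minimal sketch of how that could be done with a drop-in, purely as an illustration (cephadm owns and may regenerate this unit, so the override is not something the test itself performs):

    # add a drop-in override for the cephadm-generated unit template
    sudo systemctl edit ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service
    # in the editor, add the following two lines, then save:
    #   [Service]
    #   KillMode=mixed
    sudo systemctl daemon-reload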
2026-03-09T14:30:33.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:32 vm11 bash[17885]: cluster 2026-03-09T14:30:30.963250+0000 mgr.y (mgr.14152) 56 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T14:30:33.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:32 vm11 bash[17885]: audit 2026-03-09T14:30:32.169642+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T14:30:33.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:32 vm11 bash[17885]: audit 2026-03-09T14:30:32.170115+0000 mon.a (mon.0) 277 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:33.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:33 vm07 bash[22585]: cephadm 2026-03-09T14:30:32.170451+0000 mgr.y (mgr.14152) 57 : cephadm [INF] Deploying daemon osd.1 on vm07 2026-03-09T14:30:33.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:33 vm07 bash[22585]: audit 2026-03-09T14:30:32.956016+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:33.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:33 vm07 bash[22585]: audit 2026-03-09T14:30:32.969745+0000 mon.a (mon.0) 279 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:30:33.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:33 vm07 bash[22585]: audit 2026-03-09T14:30:32.972117+0000 mon.a (mon.0) 280 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:33.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:33 vm07 bash[22585]: audit 2026-03-09T14:30:32.973997+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:30:33.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:33 vm07 bash[17480]: cephadm 2026-03-09T14:30:32.170451+0000 mgr.y (mgr.14152) 57 : cephadm [INF] Deploying daemon osd.1 on vm07 2026-03-09T14:30:33.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:33 vm07 bash[17480]: audit 2026-03-09T14:30:32.956016+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:33.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:33 vm07 bash[17480]: audit 2026-03-09T14:30:32.969745+0000 mon.a (mon.0) 279 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:30:33.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:33 vm07 bash[17480]: audit 2026-03-09T14:30:32.972117+0000 mon.a (mon.0) 280 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:33.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:33 vm07 bash[17480]: audit 2026-03-09T14:30:32.973997+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:30:34.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:33 vm11 bash[17885]: cephadm 2026-03-09T14:30:32.170451+0000 
mgr.y (mgr.14152) 57 : cephadm [INF] Deploying daemon osd.1 on vm07 2026-03-09T14:30:34.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:33 vm11 bash[17885]: audit 2026-03-09T14:30:32.956016+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:34.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:33 vm11 bash[17885]: audit 2026-03-09T14:30:32.969745+0000 mon.a (mon.0) 279 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:30:34.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:33 vm11 bash[17885]: audit 2026-03-09T14:30:32.972117+0000 mon.a (mon.0) 280 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:34.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:33 vm11 bash[17885]: audit 2026-03-09T14:30:32.973997+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:30:34.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:34 vm07 bash[22585]: cluster 2026-03-09T14:30:32.964550+0000 mgr.y (mgr.14152) 58 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T14:30:34.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:34 vm07 bash[17480]: cluster 2026-03-09T14:30:32.964550+0000 mgr.y (mgr.14152) 58 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T14:30:35.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:34 vm11 bash[17885]: cluster 2026-03-09T14:30:32.964550+0000 mgr.y (mgr.14152) 58 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T14:30:36.301 INFO:teuthology.orchestra.run.vm07.stdout:Created osd(s) 1 on host 'vm07' 2026-03-09T14:30:36.363 DEBUG:teuthology.orchestra.run.vm07:osd.1> sudo journalctl -f -n 0 -u ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@osd.1.service 2026-03-09T14:30:36.363 INFO:tasks.cephadm:Deploying osd.2 on vm07 with /dev/vdc... 
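At this point the run has created osd.0 and osd.1 on vm07 and is moving on to /dev/vdc; the monitors' osdmap lines ("2 total, 1 up, 2 in" and, shortly after, "2 total, 2 up, 2 in") track the same progress. A quick way to confirm that state from the admin host, reusing the cephadm shell invocation the task already issues (values again taken from this run, a sketch rather than part of the test):

    # report orchestrator-managed daemons, the CRUSH tree and overall cluster state
    sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- \
        bash -c 'ceph orch ps ; ceph osd tree ; ceph -s'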
2026-03-09T14:30:36.364 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- lvm zap /dev/vdc 2026-03-09T14:30:36.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:36 vm07 bash[22585]: cluster 2026-03-09T14:30:34.964809+0000 mgr.y (mgr.14152) 59 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T14:30:36.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:36 vm07 bash[22585]: audit 2026-03-09T14:30:35.923419+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:36.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:36 vm07 bash[22585]: audit 2026-03-09T14:30:35.929238+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:36.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:36 vm07 bash[22585]: audit 2026-03-09T14:30:36.091824+0000 mon.a (mon.0) 284 : audit [INF] from='osd.1 [v2:192.168.123.107:6810/2809750614,v1:192.168.123.107:6811/2809750614]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T14:30:36.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:36 vm07 bash[22585]: audit 2026-03-09T14:30:36.291875+0000 mon.a (mon.0) 285 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:36.627 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:36 vm07 bash[22585]: audit 2026-03-09T14:30:36.295607+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:30:36.628 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:36 vm07 bash[22585]: audit 2026-03-09T14:30:36.297954+0000 mon.a (mon.0) 287 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:36.628 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:36 vm07 bash[22585]: audit 2026-03-09T14:30:36.298356+0000 mon.a (mon.0) 288 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:30:36.628 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:36 vm07 bash[17480]: cluster 2026-03-09T14:30:34.964809+0000 mgr.y (mgr.14152) 59 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T14:30:36.628 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:36 vm07 bash[17480]: audit 2026-03-09T14:30:35.923419+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:36.628 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:36 vm07 bash[17480]: audit 2026-03-09T14:30:35.929238+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:36.628 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:36 vm07 bash[17480]: audit 2026-03-09T14:30:36.091824+0000 mon.a (mon.0) 284 : audit [INF] from='osd.1 [v2:192.168.123.107:6810/2809750614,v1:192.168.123.107:6811/2809750614]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T14:30:36.628 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:36 vm07 
bash[17480]: audit 2026-03-09T14:30:36.291875+0000 mon.a (mon.0) 285 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:36.628 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:36 vm07 bash[17480]: audit 2026-03-09T14:30:36.295607+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:30:36.628 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:36 vm07 bash[17480]: audit 2026-03-09T14:30:36.297954+0000 mon.a (mon.0) 287 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:36.628 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:36 vm07 bash[17480]: audit 2026-03-09T14:30:36.298356+0000 mon.a (mon.0) 288 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:30:36.962 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T14:30:36.972 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph orch daemon add osd vm07:/dev/vdc 2026-03-09T14:30:37.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:36 vm11 bash[17885]: cluster 2026-03-09T14:30:34.964809+0000 mgr.y (mgr.14152) 59 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T14:30:37.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:36 vm11 bash[17885]: audit 2026-03-09T14:30:35.923419+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:37.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:36 vm11 bash[17885]: audit 2026-03-09T14:30:35.929238+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:37.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:36 vm11 bash[17885]: audit 2026-03-09T14:30:36.091824+0000 mon.a (mon.0) 284 : audit [INF] from='osd.1 [v2:192.168.123.107:6810/2809750614,v1:192.168.123.107:6811/2809750614]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T14:30:37.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:36 vm11 bash[17885]: audit 2026-03-09T14:30:36.291875+0000 mon.a (mon.0) 285 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:37.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:36 vm11 bash[17885]: audit 2026-03-09T14:30:36.295607+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:30:37.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:36 vm11 bash[17885]: audit 2026-03-09T14:30:36.297954+0000 mon.a (mon.0) 287 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:37.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:36 vm11 bash[17885]: audit 2026-03-09T14:30:36.298356+0000 mon.a (mon.0) 288 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:30:38.261 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:37 vm11 bash[17885]: audit 2026-03-09T14:30:36.936098+0000 mon.a (mon.0) 289 : audit [INF] from='osd.1 [v2:192.168.123.107:6810/2809750614,v1:192.168.123.107:6811/2809750614]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T14:30:38.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:37 vm11 bash[17885]: cluster 2026-03-09T14:30:36.936152+0000 mon.a (mon.0) 290 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-09T14:30:38.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:37 vm11 bash[17885]: audit 2026-03-09T14:30:36.936520+0000 mon.a (mon.0) 291 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:30:38.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:37 vm11 bash[17885]: audit 2026-03-09T14:30:36.938213+0000 mon.a (mon.0) 292 : audit [INF] from='osd.1 [v2:192.168.123.107:6810/2809750614,v1:192.168.123.107:6811/2809750614]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:30:38.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:37 vm11 bash[17885]: cluster 2026-03-09T14:30:36.965053+0000 mgr.y (mgr.14152) 60 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T14:30:38.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:37 vm11 bash[17885]: audit 2026-03-09T14:30:37.360333+0000 mgr.y (mgr.14152) 61 : audit [DBG] from='client.24148 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:30:38.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:37 vm11 bash[17885]: audit 2026-03-09T14:30:37.361862+0000 mon.a (mon.0) 293 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:30:38.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:37 vm11 bash[17885]: audit 2026-03-09T14:30:37.363045+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:30:38.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:37 vm11 bash[17885]: audit 2026-03-09T14:30:37.363383+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:38.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:37 vm07 bash[22585]: audit 2026-03-09T14:30:36.936098+0000 mon.a (mon.0) 289 : audit [INF] from='osd.1 [v2:192.168.123.107:6810/2809750614,v1:192.168.123.107:6811/2809750614]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T14:30:38.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:37 vm07 bash[22585]: cluster 2026-03-09T14:30:36.936152+0000 mon.a (mon.0) 290 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-09T14:30:38.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:37 vm07 bash[22585]: audit 2026-03-09T14:30:36.936520+0000 mon.a (mon.0) 291 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:30:38.418 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:37 vm07 bash[22585]: audit 2026-03-09T14:30:36.938213+0000 mon.a (mon.0) 292 : audit [INF] from='osd.1 [v2:192.168.123.107:6810/2809750614,v1:192.168.123.107:6811/2809750614]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:30:38.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:37 vm07 bash[22585]: cluster 2026-03-09T14:30:36.965053+0000 mgr.y (mgr.14152) 60 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T14:30:38.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:37 vm07 bash[22585]: audit 2026-03-09T14:30:37.360333+0000 mgr.y (mgr.14152) 61 : audit [DBG] from='client.24148 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:30:38.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:37 vm07 bash[22585]: audit 2026-03-09T14:30:37.361862+0000 mon.a (mon.0) 293 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:30:38.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:37 vm07 bash[22585]: audit 2026-03-09T14:30:37.363045+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:30:38.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:37 vm07 bash[22585]: audit 2026-03-09T14:30:37.363383+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:38.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:37 vm07 bash[17480]: audit 2026-03-09T14:30:36.936098+0000 mon.a (mon.0) 289 : audit [INF] from='osd.1 [v2:192.168.123.107:6810/2809750614,v1:192.168.123.107:6811/2809750614]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T14:30:38.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:37 vm07 bash[17480]: cluster 2026-03-09T14:30:36.936152+0000 mon.a (mon.0) 290 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-09T14:30:38.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:37 vm07 bash[17480]: audit 2026-03-09T14:30:36.936520+0000 mon.a (mon.0) 291 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:30:38.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:37 vm07 bash[17480]: audit 2026-03-09T14:30:36.938213+0000 mon.a (mon.0) 292 : audit [INF] from='osd.1 [v2:192.168.123.107:6810/2809750614,v1:192.168.123.107:6811/2809750614]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:30:38.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:37 vm07 bash[17480]: cluster 2026-03-09T14:30:36.965053+0000 mgr.y (mgr.14152) 60 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T14:30:38.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:37 vm07 bash[17480]: audit 2026-03-09T14:30:37.360333+0000 mgr.y (mgr.14152) 61 : audit [DBG] from='client.24148 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": 
"vm07:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:30:38.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:37 vm07 bash[17480]: audit 2026-03-09T14:30:37.361862+0000 mon.a (mon.0) 293 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:30:38.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:37 vm07 bash[17480]: audit 2026-03-09T14:30:37.363045+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:30:38.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:37 vm07 bash[17480]: audit 2026-03-09T14:30:37.363383+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:38.418 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:30:37 vm07 bash[28423]: debug 2026-03-09T14:30:37.940+0000 7f5468b07700 -1 osd.1 0 waiting for initial osdmap 2026-03-09T14:30:38.418 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:30:37 vm07 bash[28423]: debug 2026-03-09T14:30:37.944+0000 7f5463c9f700 -1 osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T14:30:39.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:38 vm11 bash[17885]: audit 2026-03-09T14:30:37.938534+0000 mon.a (mon.0) 296 : audit [INF] from='osd.1 [v2:192.168.123.107:6810/2809750614,v1:192.168.123.107:6811/2809750614]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-09T14:30:39.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:38 vm11 bash[17885]: cluster 2026-03-09T14:30:37.938599+0000 mon.a (mon.0) 297 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-09T14:30:39.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:38 vm11 bash[17885]: audit 2026-03-09T14:30:37.942880+0000 mon.a (mon.0) 298 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:30:39.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:38 vm07 bash[22585]: audit 2026-03-09T14:30:37.938534+0000 mon.a (mon.0) 296 : audit [INF] from='osd.1 [v2:192.168.123.107:6810/2809750614,v1:192.168.123.107:6811/2809750614]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-09T14:30:39.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:38 vm07 bash[22585]: cluster 2026-03-09T14:30:37.938599+0000 mon.a (mon.0) 297 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-09T14:30:39.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:38 vm07 bash[22585]: audit 2026-03-09T14:30:37.942880+0000 mon.a (mon.0) 298 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:30:39.417 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:38 vm07 bash[17480]: audit 2026-03-09T14:30:37.938534+0000 mon.a (mon.0) 296 : audit [INF] from='osd.1 [v2:192.168.123.107:6810/2809750614,v1:192.168.123.107:6811/2809750614]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-09T14:30:39.418 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:38 vm07 bash[17480]: cluster 2026-03-09T14:30:37.938599+0000 mon.a (mon.0) 297 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-09T14:30:39.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:38 vm07 bash[17480]: audit 2026-03-09T14:30:37.942880+0000 mon.a (mon.0) 298 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:30:40.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:39 vm11 bash[17885]: cluster 2026-03-09T14:30:37.134006+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T14:30:40.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:39 vm11 bash[17885]: cluster 2026-03-09T14:30:37.134083+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T14:30:40.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:39 vm11 bash[17885]: audit 2026-03-09T14:30:38.945987+0000 mon.a (mon.0) 299 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:30:40.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:39 vm11 bash[17885]: cluster 2026-03-09T14:30:38.951526+0000 mon.a (mon.0) 300 : cluster [INF] osd.1 [v2:192.168.123.107:6810/2809750614,v1:192.168.123.107:6811/2809750614] boot 2026-03-09T14:30:40.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:39 vm11 bash[17885]: cluster 2026-03-09T14:30:38.951548+0000 mon.a (mon.0) 301 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-09T14:30:40.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:39 vm11 bash[17885]: audit 2026-03-09T14:30:38.954158+0000 mon.a (mon.0) 302 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:30:40.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:39 vm11 bash[17885]: cluster 2026-03-09T14:30:38.965255+0000 mgr.y (mgr.14152) 62 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T14:30:40.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:39 vm07 bash[22585]: cluster 2026-03-09T14:30:37.134006+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T14:30:40.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:39 vm07 bash[22585]: cluster 2026-03-09T14:30:37.134083+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T14:30:40.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:39 vm07 bash[22585]: audit 2026-03-09T14:30:38.945987+0000 mon.a (mon.0) 299 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:30:40.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:39 vm07 bash[22585]: cluster 2026-03-09T14:30:38.951526+0000 mon.a (mon.0) 300 : cluster [INF] osd.1 [v2:192.168.123.107:6810/2809750614,v1:192.168.123.107:6811/2809750614] boot 2026-03-09T14:30:40.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:39 vm07 bash[22585]: cluster 2026-03-09T14:30:38.951548+0000 mon.a (mon.0) 301 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-09T14:30:40.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:39 vm07 bash[22585]: audit 2026-03-09T14:30:38.954158+0000 mon.a (mon.0) 302 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:30:40.418 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:39 vm07 bash[22585]: cluster 2026-03-09T14:30:38.965255+0000 mgr.y (mgr.14152) 62 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T14:30:40.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:39 vm07 bash[17480]: cluster 2026-03-09T14:30:37.134006+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T14:30:40.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:39 vm07 bash[17480]: cluster 2026-03-09T14:30:37.134083+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T14:30:40.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:39 vm07 bash[17480]: audit 2026-03-09T14:30:38.945987+0000 mon.a (mon.0) 299 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:30:40.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:39 vm07 bash[17480]: cluster 2026-03-09T14:30:38.951526+0000 mon.a (mon.0) 300 : cluster [INF] osd.1 [v2:192.168.123.107:6810/2809750614,v1:192.168.123.107:6811/2809750614] boot 2026-03-09T14:30:40.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:39 vm07 bash[17480]: cluster 2026-03-09T14:30:38.951548+0000 mon.a (mon.0) 301 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-09T14:30:40.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:39 vm07 bash[17480]: audit 2026-03-09T14:30:38.954158+0000 mon.a (mon.0) 302 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:30:40.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:39 vm07 bash[17480]: cluster 2026-03-09T14:30:38.965255+0000 mgr.y (mgr.14152) 62 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-09T14:30:41.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:40 vm11 bash[17885]: cluster 2026-03-09T14:30:39.971359+0000 mon.a (mon.0) 303 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in 2026-03-09T14:30:41.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:40 vm11 bash[17885]: cephadm 2026-03-09T14:30:40.552577+0000 mgr.y (mgr.14152) 63 : cephadm [INF] Detected new or changed devices on vm07 2026-03-09T14:30:41.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:40 vm11 bash[17885]: audit 2026-03-09T14:30:40.558802+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:41.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:40 vm11 bash[17885]: audit 2026-03-09T14:30:40.560060+0000 mon.a (mon.0) 305 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:30:41.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:40 vm11 bash[17885]: audit 2026-03-09T14:30:40.564277+0000 mon.a (mon.0) 306 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:41.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:40 vm07 bash[22585]: cluster 2026-03-09T14:30:39.971359+0000 mon.a (mon.0) 303 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in 2026-03-09T14:30:41.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:40 vm07 bash[22585]: cephadm 2026-03-09T14:30:40.552577+0000 mgr.y (mgr.14152) 63 : cephadm [INF] Detected new or changed devices on vm07 2026-03-09T14:30:41.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:40 vm07 
bash[22585]: audit 2026-03-09T14:30:40.558802+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:41.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:40 vm07 bash[22585]: audit 2026-03-09T14:30:40.560060+0000 mon.a (mon.0) 305 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:30:41.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:40 vm07 bash[22585]: audit 2026-03-09T14:30:40.564277+0000 mon.a (mon.0) 306 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:41.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:40 vm07 bash[17480]: cluster 2026-03-09T14:30:39.971359+0000 mon.a (mon.0) 303 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in 2026-03-09T14:30:41.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:40 vm07 bash[17480]: cephadm 2026-03-09T14:30:40.552577+0000 mgr.y (mgr.14152) 63 : cephadm [INF] Detected new or changed devices on vm07 2026-03-09T14:30:41.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:40 vm07 bash[17480]: audit 2026-03-09T14:30:40.558802+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:41.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:40 vm07 bash[17480]: audit 2026-03-09T14:30:40.560060+0000 mon.a (mon.0) 305 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:30:41.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:40 vm07 bash[17480]: audit 2026-03-09T14:30:40.564277+0000 mon.a (mon.0) 306 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:42.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:41 vm11 bash[17885]: cluster 2026-03-09T14:30:40.965519+0000 mgr.y (mgr.14152) 64 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 9.7 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:30:42.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:41 vm11 bash[17885]: audit 2026-03-09T14:30:41.476441+0000 mon.c (mon.1) 9 : audit [INF] from='client.? 192.168.123.107:0/1668565907' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "6878f209-d828-467d-8a66-6cca096732a5"}]: dispatch 2026-03-09T14:30:42.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:41 vm11 bash[17885]: audit 2026-03-09T14:30:41.476983+0000 mon.a (mon.0) 307 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "6878f209-d828-467d-8a66-6cca096732a5"}]: dispatch 2026-03-09T14:30:42.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:41 vm11 bash[17885]: audit 2026-03-09T14:30:41.483562+0000 mon.a (mon.0) 308 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "6878f209-d828-467d-8a66-6cca096732a5"}]': finished 2026-03-09T14:30:42.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:41 vm11 bash[17885]: cluster 2026-03-09T14:30:41.483655+0000 mon.a (mon.0) 309 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-09T14:30:42.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:41 vm11 bash[17885]: audit 2026-03-09T14:30:41.483941+0000 mon.a (mon.0) 310 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:30:42.417 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:41 vm07 bash[17480]: cluster 2026-03-09T14:30:40.965519+0000 mgr.y (mgr.14152) 64 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 9.7 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:30:42.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:41 vm07 bash[17480]: audit 2026-03-09T14:30:41.476441+0000 mon.c (mon.1) 9 : audit [INF] from='client.? 192.168.123.107:0/1668565907' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "6878f209-d828-467d-8a66-6cca096732a5"}]: dispatch 2026-03-09T14:30:42.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:41 vm07 bash[17480]: audit 2026-03-09T14:30:41.476983+0000 mon.a (mon.0) 307 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "6878f209-d828-467d-8a66-6cca096732a5"}]: dispatch 2026-03-09T14:30:42.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:41 vm07 bash[17480]: audit 2026-03-09T14:30:41.483562+0000 mon.a (mon.0) 308 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "6878f209-d828-467d-8a66-6cca096732a5"}]': finished 2026-03-09T14:30:42.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:41 vm07 bash[17480]: cluster 2026-03-09T14:30:41.483655+0000 mon.a (mon.0) 309 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-09T14:30:42.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:41 vm07 bash[17480]: audit 2026-03-09T14:30:41.483941+0000 mon.a (mon.0) 310 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:30:42.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:41 vm07 bash[22585]: cluster 2026-03-09T14:30:40.965519+0000 mgr.y (mgr.14152) 64 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 9.7 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:30:42.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:41 vm07 bash[22585]: audit 2026-03-09T14:30:41.476441+0000 mon.c (mon.1) 9 : audit [INF] from='client.? 192.168.123.107:0/1668565907' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "6878f209-d828-467d-8a66-6cca096732a5"}]: dispatch 2026-03-09T14:30:42.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:41 vm07 bash[22585]: audit 2026-03-09T14:30:41.476983+0000 mon.a (mon.0) 307 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "6878f209-d828-467d-8a66-6cca096732a5"}]: dispatch 2026-03-09T14:30:42.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:41 vm07 bash[22585]: audit 2026-03-09T14:30:41.483562+0000 mon.a (mon.0) 308 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "6878f209-d828-467d-8a66-6cca096732a5"}]': finished 2026-03-09T14:30:42.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:41 vm07 bash[22585]: cluster 2026-03-09T14:30:41.483655+0000 mon.a (mon.0) 309 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-09T14:30:42.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:41 vm07 bash[22585]: audit 2026-03-09T14:30:41.483941+0000 mon.a (mon.0) 310 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:30:43.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:42 vm11 bash[17885]: audit 2026-03-09T14:30:42.095219+0000 mon.c (mon.1) 10 : audit [DBG] from='client.? 192.168.123.107:0/4290808191' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:30:43.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:42 vm11 bash[17885]: audit 2026-03-09T14:30:42.323349+0000 mon.a (mon.0) 311 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:30:43.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:42 vm11 bash[17885]: audit 2026-03-09T14:30:42.326684+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:30:43.417 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:42 vm07 bash[17480]: audit 2026-03-09T14:30:42.095219+0000 mon.c (mon.1) 10 : audit [DBG] from='client.? 192.168.123.107:0/4290808191' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:30:43.417 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:42 vm07 bash[17480]: audit 2026-03-09T14:30:42.323349+0000 mon.a (mon.0) 311 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:30:43.417 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:42 vm07 bash[17480]: audit 2026-03-09T14:30:42.326684+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:30:43.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:42 vm07 bash[22585]: audit 2026-03-09T14:30:42.095219+0000 mon.c (mon.1) 10 : audit [DBG] from='client.? 
192.168.123.107:0/4290808191' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:30:43.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:42 vm07 bash[22585]: audit 2026-03-09T14:30:42.323349+0000 mon.a (mon.0) 311 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:30:43.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:42 vm07 bash[22585]: audit 2026-03-09T14:30:42.326684+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:30:44.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:43 vm11 bash[17885]: cluster 2026-03-09T14:30:42.965991+0000 mgr.y (mgr.14152) 65 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 9.7 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:30:44.363 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:43 vm07 bash[17480]: cluster 2026-03-09T14:30:42.965991+0000 mgr.y (mgr.14152) 65 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 9.7 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:30:44.363 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:43 vm07 bash[22585]: cluster 2026-03-09T14:30:42.965991+0000 mgr.y (mgr.14152) 65 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 9.7 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:30:46.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:46 vm07 bash[17480]: cluster 2026-03-09T14:30:44.966249+0000 mgr.y (mgr.14152) 66 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:30:46.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:46 vm07 bash[22585]: cluster 2026-03-09T14:30:44.966249+0000 mgr.y (mgr.14152) 66 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:30:47.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:46 vm11 bash[17885]: cluster 2026-03-09T14:30:44.966249+0000 mgr.y (mgr.14152) 66 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:30:47.759 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:47 vm07 bash[17480]: audit 2026-03-09T14:30:47.552081+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T14:30:47.759 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:47 vm07 bash[17480]: audit 2026-03-09T14:30:47.552567+0000 mon.a (mon.0) 314 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:47.759 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:47 vm07 bash[22585]: audit 2026-03-09T14:30:47.552081+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T14:30:47.759 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:47 vm07 bash[22585]: audit 2026-03-09T14:30:47.552567+0000 mon.a (mon.0) 314 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:48.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:47 vm11 bash[17885]: audit 2026-03-09T14:30:47.552081+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.14152 
192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T14:30:48.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:47 vm11 bash[17885]: audit 2026-03-09T14:30:47.552567+0000 mon.a (mon.0) 314 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:48.343 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:48 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:30:48.343 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:48 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:30:48.343 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:30:48 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:30:48.343 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:30:48 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:30:48.343 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:48 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:30:48.343 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:48 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:30:48.343 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:30:48 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:30:48.343 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:30:48 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:30:48.343 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:30:48 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:30:48.343 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:30:48 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:30:48.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:48 vm07 bash[17480]: cluster 2026-03-09T14:30:46.966508+0000 mgr.y (mgr.14152) 67 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:30:48.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:48 vm07 bash[17480]: cephadm 2026-03-09T14:30:47.552980+0000 mgr.y (mgr.14152) 68 : cephadm [INF] Deploying daemon osd.2 on vm07 2026-03-09T14:30:48.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:48 vm07 bash[17480]: audit 2026-03-09T14:30:48.359438+0000 mon.a (mon.0) 315 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:48.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:48 vm07 bash[17480]: audit 2026-03-09T14:30:48.395508+0000 mon.a (mon.0) 316 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:30:48.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:48 vm07 bash[17480]: audit 2026-03-09T14:30:48.396684+0000 mon.a (mon.0) 317 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:48.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:48 vm07 bash[17480]: audit 2026-03-09T14:30:48.397751+0000 mon.a (mon.0) 318 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:30:48.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:48 vm07 bash[22585]: cluster 2026-03-09T14:30:46.966508+0000 mgr.y (mgr.14152) 67 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:30:48.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:48 vm07 bash[22585]: cephadm 2026-03-09T14:30:47.552980+0000 mgr.y (mgr.14152) 68 : cephadm [INF] Deploying daemon osd.2 on vm07 2026-03-09T14:30:48.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:48 vm07 bash[22585]: audit 
2026-03-09T14:30:48.359438+0000 mon.a (mon.0) 315 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:48.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:48 vm07 bash[22585]: audit 2026-03-09T14:30:48.395508+0000 mon.a (mon.0) 316 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:30:48.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:48 vm07 bash[22585]: audit 2026-03-09T14:30:48.396684+0000 mon.a (mon.0) 317 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:48.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:48 vm07 bash[22585]: audit 2026-03-09T14:30:48.397751+0000 mon.a (mon.0) 318 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:30:49.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:48 vm11 bash[17885]: cluster 2026-03-09T14:30:46.966508+0000 mgr.y (mgr.14152) 67 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:30:49.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:48 vm11 bash[17885]: cephadm 2026-03-09T14:30:47.552980+0000 mgr.y (mgr.14152) 68 : cephadm [INF] Deploying daemon osd.2 on vm07 2026-03-09T14:30:49.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:48 vm11 bash[17885]: audit 2026-03-09T14:30:48.359438+0000 mon.a (mon.0) 315 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:49.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:48 vm11 bash[17885]: audit 2026-03-09T14:30:48.395508+0000 mon.a (mon.0) 316 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:30:49.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:48 vm11 bash[17885]: audit 2026-03-09T14:30:48.396684+0000 mon.a (mon.0) 317 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:49.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:48 vm11 bash[17885]: audit 2026-03-09T14:30:48.397751+0000 mon.a (mon.0) 318 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:30:50.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:50 vm07 bash[22585]: cluster 2026-03-09T14:30:48.966724+0000 mgr.y (mgr.14152) 69 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:30:50.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:50 vm07 bash[17480]: cluster 2026-03-09T14:30:48.966724+0000 mgr.y (mgr.14152) 69 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:30:51.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:50 vm11 bash[17885]: cluster 2026-03-09T14:30:48.966724+0000 mgr.y (mgr.14152) 69 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:30:51.704 INFO:teuthology.orchestra.run.vm07.stdout:Created osd(s) 2 on host 'vm07' 2026-03-09T14:30:51.750 DEBUG:teuthology.orchestra.run.vm07:osd.2> sudo journalctl -f -n 0 -u ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@osd.2.service 2026-03-09T14:30:51.750 INFO:tasks.cephadm:Deploying 
osd.3 on vm07 with /dev/vdb... 2026-03-09T14:30:51.750 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- lvm zap /dev/vdb 2026-03-09T14:30:51.863 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:51 vm11 bash[17885]: audit 2026-03-09T14:30:51.253860+0000 mon.c (mon.1) 11 : audit [INF] from='osd.2 [v2:192.168.123.107:6818/2936867491,v1:192.168.123.107:6819/2936867491]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T14:30:51.863 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:51 vm11 bash[17885]: audit 2026-03-09T14:30:51.254495+0000 mon.a (mon.0) 319 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T14:30:51.863 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:51 vm11 bash[17885]: audit 2026-03-09T14:30:51.317046+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:51.863 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:51 vm11 bash[17885]: audit 2026-03-09T14:30:51.458856+0000 mon.a (mon.0) 321 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:51.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:51 vm07 bash[22585]: audit 2026-03-09T14:30:51.253860+0000 mon.c (mon.1) 11 : audit [INF] from='osd.2 [v2:192.168.123.107:6818/2936867491,v1:192.168.123.107:6819/2936867491]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T14:30:51.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:51 vm07 bash[22585]: audit 2026-03-09T14:30:51.254495+0000 mon.a (mon.0) 319 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T14:30:51.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:51 vm07 bash[22585]: audit 2026-03-09T14:30:51.317046+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:51.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:51 vm07 bash[22585]: audit 2026-03-09T14:30:51.458856+0000 mon.a (mon.0) 321 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:51.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:51 vm07 bash[17480]: audit 2026-03-09T14:30:51.253860+0000 mon.c (mon.1) 11 : audit [INF] from='osd.2 [v2:192.168.123.107:6818/2936867491,v1:192.168.123.107:6819/2936867491]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T14:30:51.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:51 vm07 bash[17480]: audit 2026-03-09T14:30:51.254495+0000 mon.a (mon.0) 319 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T14:30:51.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:51 vm07 bash[17480]: audit 2026-03-09T14:30:51.317046+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:51.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:51 vm07 bash[17480]: audit 2026-03-09T14:30:51.458856+0000 mon.a (mon.0) 321 : audit [INF] from='mgr.14152 
192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:52.337 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T14:30:52.346 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph orch daemon add osd vm07:/dev/vdb 2026-03-09T14:30:52.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:52 vm07 bash[17480]: cluster 2026-03-09T14:30:50.966928+0000 mgr.y (mgr.14152) 70 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:30:52.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:52 vm07 bash[17480]: audit 2026-03-09T14:30:51.612124+0000 mon.a (mon.0) 322 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T14:30:52.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:52 vm07 bash[17480]: cluster 2026-03-09T14:30:51.612206+0000 mon.a (mon.0) 323 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-09T14:30:52.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:52 vm07 bash[17480]: audit 2026-03-09T14:30:51.612328+0000 mon.a (mon.0) 324 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:30:52.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:52 vm07 bash[17480]: audit 2026-03-09T14:30:51.613059+0000 mon.c (mon.1) 12 : audit [INF] from='osd.2 [v2:192.168.123.107:6818/2936867491,v1:192.168.123.107:6819/2936867491]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:30:52.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:52 vm07 bash[17480]: audit 2026-03-09T14:30:51.613347+0000 mon.a (mon.0) 325 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:30:52.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:52 vm07 bash[17480]: audit 2026-03-09T14:30:51.697755+0000 mon.a (mon.0) 326 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:52.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:52 vm07 bash[17480]: audit 2026-03-09T14:30:51.724528+0000 mon.a (mon.0) 327 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:30:52.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:52 vm07 bash[17480]: audit 2026-03-09T14:30:51.725404+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:52.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:52 vm07 bash[17480]: audit 2026-03-09T14:30:51.726503+0000 mon.a (mon.0) 329 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:30:52.918 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:30:52 vm07 bash[31564]: debug 2026-03-09T14:30:52.616+0000 7f289befb700 -1 osd.2 0 waiting for initial osdmap 2026-03-09T14:30:52.918 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:30:52 vm07 bash[31564]: debug 2026-03-09T14:30:52.620+0000 7f289488e700 -1 osd.2 17 
set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T14:30:52.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:52 vm07 bash[22585]: cluster 2026-03-09T14:30:50.966928+0000 mgr.y (mgr.14152) 70 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:30:52.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:52 vm07 bash[22585]: audit 2026-03-09T14:30:51.612124+0000 mon.a (mon.0) 322 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T14:30:52.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:52 vm07 bash[22585]: cluster 2026-03-09T14:30:51.612206+0000 mon.a (mon.0) 323 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-09T14:30:52.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:52 vm07 bash[22585]: audit 2026-03-09T14:30:51.612328+0000 mon.a (mon.0) 324 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:30:52.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:52 vm07 bash[22585]: audit 2026-03-09T14:30:51.613059+0000 mon.c (mon.1) 12 : audit [INF] from='osd.2 [v2:192.168.123.107:6818/2936867491,v1:192.168.123.107:6819/2936867491]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:30:52.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:52 vm07 bash[22585]: audit 2026-03-09T14:30:51.613347+0000 mon.a (mon.0) 325 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:30:52.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:52 vm07 bash[22585]: audit 2026-03-09T14:30:51.697755+0000 mon.a (mon.0) 326 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:52.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:52 vm07 bash[22585]: audit 2026-03-09T14:30:51.724528+0000 mon.a (mon.0) 327 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:30:52.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:52 vm07 bash[22585]: audit 2026-03-09T14:30:51.725404+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:52.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:52 vm07 bash[22585]: audit 2026-03-09T14:30:51.726503+0000 mon.a (mon.0) 329 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:30:53.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:52 vm11 bash[17885]: cluster 2026-03-09T14:30:50.966928+0000 mgr.y (mgr.14152) 70 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:30:53.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:52 vm11 bash[17885]: audit 2026-03-09T14:30:51.612124+0000 mon.a (mon.0) 322 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T14:30:53.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:52 vm11 bash[17885]: cluster 
2026-03-09T14:30:51.612206+0000 mon.a (mon.0) 323 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-09T14:30:53.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:52 vm11 bash[17885]: audit 2026-03-09T14:30:51.612328+0000 mon.a (mon.0) 324 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:30:53.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:52 vm11 bash[17885]: audit 2026-03-09T14:30:51.613059+0000 mon.c (mon.1) 12 : audit [INF] from='osd.2 [v2:192.168.123.107:6818/2936867491,v1:192.168.123.107:6819/2936867491]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:30:53.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:52 vm11 bash[17885]: audit 2026-03-09T14:30:51.613347+0000 mon.a (mon.0) 325 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:30:53.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:52 vm11 bash[17885]: audit 2026-03-09T14:30:51.697755+0000 mon.a (mon.0) 326 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:53.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:52 vm11 bash[17885]: audit 2026-03-09T14:30:51.724528+0000 mon.a (mon.0) 327 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:30:53.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:52 vm11 bash[17885]: audit 2026-03-09T14:30:51.725404+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:53.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:52 vm11 bash[17885]: audit 2026-03-09T14:30:51.726503+0000 mon.a (mon.0) 329 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:30:53.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:53 vm07 bash[22585]: audit 2026-03-09T14:30:52.616776+0000 mon.a (mon.0) 330 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-09T14:30:53.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:53 vm07 bash[22585]: cluster 2026-03-09T14:30:52.616888+0000 mon.a (mon.0) 331 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-09T14:30:53.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:53 vm07 bash[22585]: audit 2026-03-09T14:30:52.617897+0000 mon.a (mon.0) 332 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:30:53.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:53 vm07 bash[22585]: audit 2026-03-09T14:30:52.619519+0000 mon.a (mon.0) 333 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:30:53.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:53 vm07 bash[22585]: audit 2026-03-09T14:30:52.731137+0000 mgr.y (mgr.14152) 71 : audit [DBG] from='client.24169 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdb", "target": 
["mon-mgr", ""]}]: dispatch 2026-03-09T14:30:53.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:53 vm07 bash[22585]: audit 2026-03-09T14:30:52.732390+0000 mon.a (mon.0) 334 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:30:53.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:53 vm07 bash[22585]: audit 2026-03-09T14:30:52.733691+0000 mon.a (mon.0) 335 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:30:53.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:53 vm07 bash[22585]: audit 2026-03-09T14:30:52.734101+0000 mon.a (mon.0) 336 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:53.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:53 vm07 bash[22585]: cluster 2026-03-09T14:30:52.967147+0000 mgr.y (mgr.14152) 72 : cluster [DBG] pgmap v42: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:30:53.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:53 vm07 bash[22585]: audit 2026-03-09T14:30:53.619539+0000 mon.a (mon.0) 337 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:30:53.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:53 vm07 bash[17480]: audit 2026-03-09T14:30:52.616776+0000 mon.a (mon.0) 330 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-09T14:30:53.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:53 vm07 bash[17480]: cluster 2026-03-09T14:30:52.616888+0000 mon.a (mon.0) 331 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-09T14:30:53.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:53 vm07 bash[17480]: audit 2026-03-09T14:30:52.617897+0000 mon.a (mon.0) 332 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:30:53.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:53 vm07 bash[17480]: audit 2026-03-09T14:30:52.619519+0000 mon.a (mon.0) 333 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:30:53.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:53 vm07 bash[17480]: audit 2026-03-09T14:30:52.731137+0000 mgr.y (mgr.14152) 71 : audit [DBG] from='client.24169 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:30:53.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:53 vm07 bash[17480]: audit 2026-03-09T14:30:52.732390+0000 mon.a (mon.0) 334 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:30:53.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:53 vm07 bash[17480]: audit 2026-03-09T14:30:52.733691+0000 mon.a (mon.0) 335 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:30:53.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:53 vm07 bash[17480]: 
audit 2026-03-09T14:30:52.734101+0000 mon.a (mon.0) 336 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:53.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:53 vm07 bash[17480]: cluster 2026-03-09T14:30:52.967147+0000 mgr.y (mgr.14152) 72 : cluster [DBG] pgmap v42: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:30:53.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:53 vm07 bash[17480]: audit 2026-03-09T14:30:53.619539+0000 mon.a (mon.0) 337 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:30:54.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:53 vm11 bash[17885]: audit 2026-03-09T14:30:52.616776+0000 mon.a (mon.0) 330 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-09T14:30:54.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:53 vm11 bash[17885]: cluster 2026-03-09T14:30:52.616888+0000 mon.a (mon.0) 331 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-09T14:30:54.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:53 vm11 bash[17885]: audit 2026-03-09T14:30:52.617897+0000 mon.a (mon.0) 332 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:30:54.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:53 vm11 bash[17885]: audit 2026-03-09T14:30:52.619519+0000 mon.a (mon.0) 333 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:30:54.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:53 vm11 bash[17885]: audit 2026-03-09T14:30:52.731137+0000 mgr.y (mgr.14152) 71 : audit [DBG] from='client.24169 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:30:54.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:53 vm11 bash[17885]: audit 2026-03-09T14:30:52.732390+0000 mon.a (mon.0) 334 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:30:54.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:53 vm11 bash[17885]: audit 2026-03-09T14:30:52.733691+0000 mon.a (mon.0) 335 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:30:54.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:53 vm11 bash[17885]: audit 2026-03-09T14:30:52.734101+0000 mon.a (mon.0) 336 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:30:54.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:53 vm11 bash[17885]: cluster 2026-03-09T14:30:52.967147+0000 mgr.y (mgr.14152) 72 : cluster [DBG] pgmap v42: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail 2026-03-09T14:30:54.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:53 vm11 bash[17885]: audit 2026-03-09T14:30:53.619539+0000 mon.a (mon.0) 337 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:30:54.917 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:54 vm07 bash[22585]: cluster 2026-03-09T14:30:52.232560+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T14:30:54.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:54 vm07 bash[22585]: cluster 2026-03-09T14:30:52.232646+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T14:30:54.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:54 vm07 bash[22585]: cluster 2026-03-09T14:30:53.628669+0000 mon.a (mon.0) 338 : cluster [INF] osd.2 [v2:192.168.123.107:6818/2936867491,v1:192.168.123.107:6819/2936867491] boot 2026-03-09T14:30:54.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:54 vm07 bash[22585]: cluster 2026-03-09T14:30:53.628822+0000 mon.a (mon.0) 339 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-09T14:30:54.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:54 vm07 bash[22585]: audit 2026-03-09T14:30:53.629793+0000 mon.a (mon.0) 340 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:30:54.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:54 vm07 bash[22585]: audit 2026-03-09T14:30:54.320958+0000 mon.a (mon.0) 341 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32}]: dispatch 2026-03-09T14:30:54.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:54 vm07 bash[17480]: cluster 2026-03-09T14:30:52.232560+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T14:30:54.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:54 vm07 bash[17480]: cluster 2026-03-09T14:30:52.232646+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T14:30:54.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:54 vm07 bash[17480]: cluster 2026-03-09T14:30:53.628669+0000 mon.a (mon.0) 338 : cluster [INF] osd.2 [v2:192.168.123.107:6818/2936867491,v1:192.168.123.107:6819/2936867491] boot 2026-03-09T14:30:54.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:54 vm07 bash[17480]: cluster 2026-03-09T14:30:53.628822+0000 mon.a (mon.0) 339 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-09T14:30:54.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:54 vm07 bash[17480]: audit 2026-03-09T14:30:53.629793+0000 mon.a (mon.0) 340 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:30:54.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:54 vm07 bash[17480]: audit 2026-03-09T14:30:54.320958+0000 mon.a (mon.0) 341 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32}]: dispatch 2026-03-09T14:30:55.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:54 vm11 bash[17885]: cluster 2026-03-09T14:30:52.232560+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T14:30:55.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:54 vm11 bash[17885]: cluster 2026-03-09T14:30:52.232646+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T14:30:55.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:54 vm11 bash[17885]: cluster 2026-03-09T14:30:53.628669+0000 mon.a (mon.0) 338 : cluster [INF] osd.2 
[v2:192.168.123.107:6818/2936867491,v1:192.168.123.107:6819/2936867491] boot 2026-03-09T14:30:55.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:54 vm11 bash[17885]: cluster 2026-03-09T14:30:53.628822+0000 mon.a (mon.0) 339 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-09T14:30:55.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:54 vm11 bash[17885]: audit 2026-03-09T14:30:53.629793+0000 mon.a (mon.0) 340 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:30:55.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:54 vm11 bash[17885]: audit 2026-03-09T14:30:54.320958+0000 mon.a (mon.0) 341 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32}]: dispatch 2026-03-09T14:30:55.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:55 vm07 bash[22585]: audit 2026-03-09T14:30:54.643591+0000 mon.a (mon.0) 342 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32}]': finished 2026-03-09T14:30:55.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:55 vm07 bash[22585]: cluster 2026-03-09T14:30:54.643643+0000 mon.a (mon.0) 343 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-09T14:30:55.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:55 vm07 bash[22585]: audit 2026-03-09T14:30:54.645450+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T14:30:55.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:55 vm07 bash[22585]: cluster 2026-03-09T14:30:54.967388+0000 mgr.y (mgr.14152) 73 : cluster [DBG] pgmap v45: 1 pgs: 1 creating+peering; 0 B data, 15 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:30:55.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:55 vm07 bash[17480]: audit 2026-03-09T14:30:54.643591+0000 mon.a (mon.0) 342 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32}]': finished 2026-03-09T14:30:55.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:55 vm07 bash[17480]: cluster 2026-03-09T14:30:54.643643+0000 mon.a (mon.0) 343 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-09T14:30:55.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:55 vm07 bash[17480]: audit 2026-03-09T14:30:54.645450+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T14:30:55.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:55 vm07 bash[17480]: cluster 2026-03-09T14:30:54.967388+0000 mgr.y (mgr.14152) 73 : cluster [DBG] pgmap v45: 1 pgs: 1 creating+peering; 0 B data, 15 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:30:56.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:55 vm11 bash[17885]: audit 2026-03-09T14:30:54.643591+0000 mon.a (mon.0) 342 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": 
"json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32}]': finished 2026-03-09T14:30:56.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:55 vm11 bash[17885]: cluster 2026-03-09T14:30:54.643643+0000 mon.a (mon.0) 343 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-09T14:30:56.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:55 vm11 bash[17885]: audit 2026-03-09T14:30:54.645450+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T14:30:56.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:55 vm11 bash[17885]: cluster 2026-03-09T14:30:54.967388+0000 mgr.y (mgr.14152) 73 : cluster [DBG] pgmap v45: 1 pgs: 1 creating+peering; 0 B data, 15 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:30:57.048 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:56 vm07 bash[17480]: audit 2026-03-09T14:30:55.662481+0000 mon.a (mon.0) 345 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T14:30:57.048 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:56 vm07 bash[17480]: cluster 2026-03-09T14:30:55.662657+0000 mon.a (mon.0) 346 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-09T14:30:57.048 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:56 vm07 bash[17480]: cephadm 2026-03-09T14:30:56.157230+0000 mgr.y (mgr.14152) 74 : cephadm [INF] Detected new or changed devices on vm07 2026-03-09T14:30:57.048 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:56 vm07 bash[17480]: audit 2026-03-09T14:30:56.182637+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:57.049 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:56 vm07 bash[17480]: audit 2026-03-09T14:30:56.187586+0000 mon.a (mon.0) 348 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:30:57.049 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:56 vm07 bash[17480]: audit 2026-03-09T14:30:56.193637+0000 mon.a (mon.0) 349 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:57.049 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:56 vm07 bash[17480]: audit 2026-03-09T14:30:56.482807+0000 mon.a (mon.0) 350 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T14:30:57.049 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:56 vm07 bash[17480]: audit 2026-03-09T14:30:56.636942+0000 mon.a (mon.0) 351 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T14:30:57.049 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:56 vm07 bash[17480]: audit 2026-03-09T14:30:56.637046+0000 mon.a (mon.0) 352 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:30:57.049 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:56 vm07 bash[17480]: audit 2026-03-09T14:30:56.637209+0000 mon.a (mon.0) 353 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:30:57.049 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:56 vm07 bash[17480]: audit 2026-03-09T14:30:56.637254+0000 mon.a (mon.0) 354 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:30:57.049 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:56 vm07 bash[17480]: audit 2026-03-09T14:30:56.638615+0000 mon.a (mon.0) 355 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:30:57.049 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:56 vm07 bash[17480]: audit 2026-03-09T14:30:56.638661+0000 mon.a (mon.0) 356 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:30:57.049 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:56 vm07 bash[17480]: audit 2026-03-09T14:30:56.638693+0000 mon.a (mon.0) 357 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:30:57.049 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:56 vm07 bash[22585]: audit 2026-03-09T14:30:55.662481+0000 mon.a (mon.0) 345 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T14:30:57.049 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:56 vm07 bash[22585]: cluster 2026-03-09T14:30:55.662657+0000 mon.a (mon.0) 346 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-09T14:30:57.049 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:56 vm07 bash[22585]: cephadm 2026-03-09T14:30:56.157230+0000 mgr.y (mgr.14152) 74 : cephadm [INF] Detected new or changed devices on vm07 2026-03-09T14:30:57.049 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:56 vm07 bash[22585]: audit 2026-03-09T14:30:56.182637+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:57.049 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:56 vm07 bash[22585]: audit 2026-03-09T14:30:56.187586+0000 mon.a (mon.0) 348 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:30:57.049 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:56 vm07 bash[22585]: audit 2026-03-09T14:30:56.193637+0000 mon.a (mon.0) 349 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:57.049 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:56 vm07 bash[22585]: audit 2026-03-09T14:30:56.482807+0000 mon.a (mon.0) 350 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T14:30:57.049 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:56 vm07 bash[22585]: audit 2026-03-09T14:30:56.636942+0000 mon.a (mon.0) 351 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T14:30:57.049 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:56 vm07 bash[22585]: audit 2026-03-09T14:30:56.637046+0000 mon.a (mon.0) 352 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:30:57.049 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:56 vm07 bash[22585]: audit 
2026-03-09T14:30:56.637209+0000 mon.a (mon.0) 353 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:30:57.049 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:56 vm07 bash[22585]: audit 2026-03-09T14:30:56.637254+0000 mon.a (mon.0) 354 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:30:57.049 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:56 vm07 bash[22585]: audit 2026-03-09T14:30:56.638615+0000 mon.a (mon.0) 355 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:30:57.049 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:56 vm07 bash[22585]: audit 2026-03-09T14:30:56.638661+0000 mon.a (mon.0) 356 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:30:57.049 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:56 vm07 bash[22585]: audit 2026-03-09T14:30:56.638693+0000 mon.a (mon.0) 357 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:30:57.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:56 vm11 bash[17885]: audit 2026-03-09T14:30:55.662481+0000 mon.a (mon.0) 345 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T14:30:57.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:56 vm11 bash[17885]: cluster 2026-03-09T14:30:55.662657+0000 mon.a (mon.0) 346 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-09T14:30:57.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:56 vm11 bash[17885]: cephadm 2026-03-09T14:30:56.157230+0000 mgr.y (mgr.14152) 74 : cephadm [INF] Detected new or changed devices on vm07 2026-03-09T14:30:57.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:56 vm11 bash[17885]: audit 2026-03-09T14:30:56.182637+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:57.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:56 vm11 bash[17885]: audit 2026-03-09T14:30:56.187586+0000 mon.a (mon.0) 348 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:30:57.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:56 vm11 bash[17885]: audit 2026-03-09T14:30:56.193637+0000 mon.a (mon.0) 349 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:30:57.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:56 vm11 bash[17885]: audit 2026-03-09T14:30:56.482807+0000 mon.a (mon.0) 350 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T14:30:57.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:56 vm11 bash[17885]: audit 2026-03-09T14:30:56.636942+0000 mon.a (mon.0) 351 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T14:30:57.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:56 vm11 bash[17885]: audit 2026-03-09T14:30:56.637046+0000 mon.a (mon.0) 352 : audit [DBG] from='mgr.14152 
192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:30:57.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:56 vm11 bash[17885]: audit 2026-03-09T14:30:56.637209+0000 mon.a (mon.0) 353 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:30:57.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:56 vm11 bash[17885]: audit 2026-03-09T14:30:56.637254+0000 mon.a (mon.0) 354 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:30:57.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:56 vm11 bash[17885]: audit 2026-03-09T14:30:56.638615+0000 mon.a (mon.0) 355 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:30:57.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:56 vm11 bash[17885]: audit 2026-03-09T14:30:56.638661+0000 mon.a (mon.0) 356 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:30:57.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:56 vm11 bash[17885]: audit 2026-03-09T14:30:56.638693+0000 mon.a (mon.0) 357 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:30:58.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:57 vm07 bash[22585]: audit 2026-03-09T14:30:56.638550+0000 mon.c (mon.1) 13 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T14:30:58.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:57 vm07 bash[22585]: audit 2026-03-09T14:30:56.784073+0000 mon.c (mon.1) 14 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T14:30:58.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:57 vm07 bash[22585]: audit 2026-03-09T14:30:56.785097+0000 mon.c (mon.1) 15 : audit [INF] from='client.? 
192.168.123.107:0/2790693654' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "afc54d82-66a7-42e1-83c1-0970428ef794"}]: dispatch 2026-03-09T14:30:58.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:57 vm07 bash[22585]: audit 2026-03-09T14:30:56.786574+0000 mon.b (mon.2) 4 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T14:30:58.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:57 vm07 bash[22585]: audit 2026-03-09T14:30:56.790955+0000 mon.a (mon.0) 358 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:30:58.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:57 vm07 bash[22585]: audit 2026-03-09T14:30:56.791485+0000 mon.a (mon.0) 359 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:30:58.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:57 vm07 bash[22585]: audit 2026-03-09T14:30:56.791606+0000 mon.a (mon.0) 360 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:30:58.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:57 vm07 bash[22585]: audit 2026-03-09T14:30:56.815066+0000 mon.a (mon.0) 361 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "afc54d82-66a7-42e1-83c1-0970428ef794"}]: dispatch 2026-03-09T14:30:58.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:57 vm07 bash[22585]: audit 2026-03-09T14:30:56.934447+0000 mon.b (mon.2) 5 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T14:30:58.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:57 vm07 bash[22585]: cluster 2026-03-09T14:30:56.949930+0000 mon.a (mon.0) 362 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in 2026-03-09T14:30:58.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:57 vm07 bash[22585]: audit 2026-03-09T14:30:56.953096+0000 mon.a (mon.0) 363 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "afc54d82-66a7-42e1-83c1-0970428ef794"}]': finished 2026-03-09T14:30:58.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:57 vm07 bash[22585]: cluster 2026-03-09T14:30:56.953122+0000 mon.a (mon.0) 364 : cluster [DBG] osdmap e22: 4 total, 3 up, 4 in 2026-03-09T14:30:58.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:57 vm07 bash[22585]: audit 2026-03-09T14:30:56.953210+0000 mon.a (mon.0) 365 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:30:58.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:57 vm07 bash[22585]: cluster 2026-03-09T14:30:56.967613+0000 mgr.y (mgr.14152) 75 : cluster [DBG] pgmap v49: 1 pgs: 1 creating+peering; 0 B data, 15 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:30:58.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:57 vm07 bash[22585]: cluster 2026-03-09T14:30:57.205066+0000 mon.a (mon.0) 366 : cluster [DBG] mgrmap e15: y(active, since 74s), standbys: x 2026-03-09T14:30:58.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:30:57 vm07 bash[22585]: audit 2026-03-09T14:30:57.592778+0000 mon.c (mon.1) 16 : audit [DBG] from='client.? 
192.168.123.107:0/3741393964' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:30:58.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:57 vm07 bash[17480]: audit 2026-03-09T14:30:56.638550+0000 mon.c (mon.1) 13 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T14:30:58.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:57 vm07 bash[17480]: audit 2026-03-09T14:30:56.784073+0000 mon.c (mon.1) 14 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T14:30:58.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:57 vm07 bash[17480]: audit 2026-03-09T14:30:56.785097+0000 mon.c (mon.1) 15 : audit [INF] from='client.? 192.168.123.107:0/2790693654' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "afc54d82-66a7-42e1-83c1-0970428ef794"}]: dispatch 2026-03-09T14:30:58.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:57 vm07 bash[17480]: audit 2026-03-09T14:30:56.786574+0000 mon.b (mon.2) 4 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T14:30:58.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:57 vm07 bash[17480]: audit 2026-03-09T14:30:56.790955+0000 mon.a (mon.0) 358 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:30:58.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:57 vm07 bash[17480]: audit 2026-03-09T14:30:56.791485+0000 mon.a (mon.0) 359 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:30:58.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:57 vm07 bash[17480]: audit 2026-03-09T14:30:56.791606+0000 mon.a (mon.0) 360 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:30:58.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:57 vm07 bash[17480]: audit 2026-03-09T14:30:56.815066+0000 mon.a (mon.0) 361 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "afc54d82-66a7-42e1-83c1-0970428ef794"}]: dispatch 2026-03-09T14:30:58.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:57 vm07 bash[17480]: audit 2026-03-09T14:30:56.934447+0000 mon.b (mon.2) 5 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T14:30:58.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:57 vm07 bash[17480]: cluster 2026-03-09T14:30:56.949930+0000 mon.a (mon.0) 362 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in 2026-03-09T14:30:58.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:57 vm07 bash[17480]: audit 2026-03-09T14:30:56.953096+0000 mon.a (mon.0) 363 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "afc54d82-66a7-42e1-83c1-0970428ef794"}]': finished 2026-03-09T14:30:58.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:57 vm07 bash[17480]: cluster 2026-03-09T14:30:56.953122+0000 mon.a (mon.0) 364 : cluster [DBG] osdmap e22: 4 total, 3 up, 4 in 2026-03-09T14:30:58.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:57 vm07 bash[17480]: audit 2026-03-09T14:30:56.953210+0000 mon.a (mon.0) 365 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:30:58.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:57 vm07 bash[17480]: cluster 2026-03-09T14:30:56.967613+0000 mgr.y (mgr.14152) 75 : cluster [DBG] pgmap v49: 1 pgs: 1 creating+peering; 0 B data, 15 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:30:58.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:57 vm07 bash[17480]: cluster 2026-03-09T14:30:57.205066+0000 mon.a (mon.0) 366 : cluster [DBG] mgrmap e15: y(active, since 74s), standbys: x 2026-03-09T14:30:58.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:30:57 vm07 bash[17480]: audit 2026-03-09T14:30:57.592778+0000 mon.c (mon.1) 16 : audit [DBG] from='client.? 192.168.123.107:0/3741393964' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:30:58.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:57 vm11 bash[17885]: audit 2026-03-09T14:30:56.638550+0000 mon.c (mon.1) 13 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T14:30:58.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:57 vm11 bash[17885]: audit 2026-03-09T14:30:56.784073+0000 mon.c (mon.1) 14 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T14:30:58.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:57 vm11 bash[17885]: audit 2026-03-09T14:30:56.785097+0000 mon.c (mon.1) 15 : audit [INF] from='client.? 192.168.123.107:0/2790693654' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "afc54d82-66a7-42e1-83c1-0970428ef794"}]: dispatch 2026-03-09T14:30:58.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:57 vm11 bash[17885]: audit 2026-03-09T14:30:56.786574+0000 mon.b (mon.2) 4 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T14:30:58.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:57 vm11 bash[17885]: audit 2026-03-09T14:30:56.790955+0000 mon.a (mon.0) 358 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:30:58.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:57 vm11 bash[17885]: audit 2026-03-09T14:30:56.791485+0000 mon.a (mon.0) 359 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:30:58.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:57 vm11 bash[17885]: audit 2026-03-09T14:30:56.791606+0000 mon.a (mon.0) 360 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:30:58.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:57 vm11 bash[17885]: audit 2026-03-09T14:30:56.815066+0000 mon.a (mon.0) 361 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "afc54d82-66a7-42e1-83c1-0970428ef794"}]: dispatch 2026-03-09T14:30:58.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:57 vm11 bash[17885]: audit 2026-03-09T14:30:56.934447+0000 mon.b (mon.2) 5 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T14:30:58.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:57 vm11 bash[17885]: cluster 2026-03-09T14:30:56.949930+0000 mon.a (mon.0) 362 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in 2026-03-09T14:30:58.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:57 vm11 bash[17885]: audit 2026-03-09T14:30:56.953096+0000 mon.a (mon.0) 363 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "afc54d82-66a7-42e1-83c1-0970428ef794"}]': finished 2026-03-09T14:30:58.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:57 vm11 bash[17885]: cluster 2026-03-09T14:30:56.953122+0000 mon.a (mon.0) 364 : cluster [DBG] osdmap e22: 4 total, 3 up, 4 in 2026-03-09T14:30:58.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:57 vm11 bash[17885]: audit 2026-03-09T14:30:56.953210+0000 mon.a (mon.0) 365 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:30:58.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:57 vm11 bash[17885]: cluster 2026-03-09T14:30:56.967613+0000 mgr.y (mgr.14152) 75 : cluster [DBG] pgmap v49: 1 pgs: 1 creating+peering; 0 B data, 15 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:30:58.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:57 vm11 bash[17885]: cluster 2026-03-09T14:30:57.205066+0000 mon.a (mon.0) 366 : cluster [DBG] mgrmap e15: y(active, since 74s), standbys: x 2026-03-09T14:30:58.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:30:57 vm11 bash[17885]: audit 2026-03-09T14:30:57.592778+0000 mon.c (mon.1) 16 : audit [DBG] from='client.? 
192.168.123.107:0/3741393964' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:31:00.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:00 vm07 bash[22585]: cluster 2026-03-09T14:30:58.967831+0000 mgr.y (mgr.14152) 76 : cluster [DBG] pgmap v50: 1 pgs: 1 creating+peering; 0 B data, 15 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:31:00.167 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:00 vm07 bash[17480]: cluster 2026-03-09T14:30:58.967831+0000 mgr.y (mgr.14152) 76 : cluster [DBG] pgmap v50: 1 pgs: 1 creating+peering; 0 B data, 15 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:31:00.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:00 vm11 bash[17885]: cluster 2026-03-09T14:30:58.967831+0000 mgr.y (mgr.14152) 76 : cluster [DBG] pgmap v50: 1 pgs: 1 creating+peering; 0 B data, 15 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:31:02.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:02 vm07 bash[22585]: cluster 2026-03-09T14:31:00.968084+0000 mgr.y (mgr.14152) 77 : cluster [DBG] pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 18 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:31:02.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:02 vm07 bash[17480]: cluster 2026-03-09T14:31:00.968084+0000 mgr.y (mgr.14152) 77 : cluster [DBG] pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 18 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:31:03.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:02 vm11 bash[17885]: cluster 2026-03-09T14:31:00.968084+0000 mgr.y (mgr.14152) 77 : cluster [DBG] pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 18 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:31:03.881 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:03 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:31:03.882 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:03 vm07 bash[17480]: audit 2026-03-09T14:31:03.063339+0000 mon.a (mon.0) 367 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T14:31:03.882 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:03 vm07 bash[17480]: audit 2026-03-09T14:31:03.063936+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:03.882 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:03 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:31:03.882 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:31:03 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:31:03.882 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:31:03 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:31:03.882 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:03 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:31:03.882 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:03 vm07 bash[22585]: audit 2026-03-09T14:31:03.063339+0000 mon.a (mon.0) 367 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T14:31:03.882 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:03 vm07 bash[22585]: audit 2026-03-09T14:31:03.063936+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:03.882 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:03 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:31:03.882 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:31:03 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:31:03.882 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:31:03 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:31:03.882 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:31:03 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:31:03.882 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:31:03 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:31:03.882 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:31:03 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:31:03.882 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:31:03 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:31:04.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:03 vm11 bash[17885]: audit 2026-03-09T14:31:03.063339+0000 mon.a (mon.0) 367 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T14:31:04.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:03 vm11 bash[17885]: audit 2026-03-09T14:31:03.063936+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:04.667 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:04 vm07 bash[22585]: cluster 2026-03-09T14:31:02.968317+0000 mgr.y (mgr.14152) 78 : cluster [DBG] pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 18 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:31:04.667 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:04 vm07 bash[22585]: cephadm 2026-03-09T14:31:03.064389+0000 mgr.y (mgr.14152) 79 : cephadm [INF] Deploying daemon osd.3 on vm07 2026-03-09T14:31:04.667 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:04 vm07 bash[22585]: audit 2026-03-09T14:31:03.904591+0000 mon.a (mon.0) 369 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:04.667 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:04 vm07 bash[22585]: audit 2026-03-09T14:31:03.924198+0000 mon.a (mon.0) 370 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:31:04.667 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:04 vm07 bash[22585]: audit 2026-03-09T14:31:03.925086+0000 mon.a (mon.0) 371 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:04.667 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:04 vm07 bash[22585]: audit 2026-03-09T14:31:03.925760+0000 mon.a (mon.0) 372 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:31:04.667 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:04 vm07 bash[17480]: cluster 2026-03-09T14:31:02.968317+0000 mgr.y (mgr.14152) 78 : cluster [DBG] pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 18 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:31:04.667 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:04 vm07 bash[17480]: cephadm 2026-03-09T14:31:03.064389+0000 mgr.y (mgr.14152) 79 : cephadm [INF] Deploying daemon osd.3 on vm07 2026-03-09T14:31:04.667 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:04 vm07 bash[17480]: audit 2026-03-09T14:31:03.904591+0000 mon.a (mon.0) 369 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:04.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:04 vm07 bash[17480]: audit 2026-03-09T14:31:03.924198+0000 mon.a (mon.0) 370 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:31:04.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:04 vm07 bash[17480]: audit 2026-03-09T14:31:03.925086+0000 mon.a (mon.0) 371 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:04.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:04 vm07 bash[17480]: audit 2026-03-09T14:31:03.925760+0000 mon.a (mon.0) 372 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:31:05.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:04 vm11 bash[17885]: cluster 2026-03-09T14:31:02.968317+0000 mgr.y (mgr.14152) 78 : cluster [DBG] pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 18 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:31:05.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:04 vm11 bash[17885]: cephadm 2026-03-09T14:31:03.064389+0000 mgr.y (mgr.14152) 79 : cephadm [INF] Deploying daemon osd.3 on vm07 2026-03-09T14:31:05.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:04 vm11 bash[17885]: audit 2026-03-09T14:31:03.904591+0000 mon.a (mon.0) 369 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:05.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:04 vm11 bash[17885]: audit 2026-03-09T14:31:03.924198+0000 mon.a (mon.0) 370 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:31:05.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:04 vm11 bash[17885]: audit 2026-03-09T14:31:03.925086+0000 mon.a (mon.0) 371 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:05.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:04 vm11 bash[17885]: audit 2026-03-09T14:31:03.925760+0000 mon.a (mon.0) 372 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:31:06.839 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:06 vm07 bash[17480]: cluster 2026-03-09T14:31:04.968546+0000 mgr.y (mgr.14152) 80 : cluster [DBG] pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 18 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:31:06.839 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:06 vm07 bash[22585]: cluster 2026-03-09T14:31:04.968546+0000 mgr.y (mgr.14152) 80 : cluster [DBG] pgmap v53: 1 pgs: 1 
active+clean; 449 KiB data, 18 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:31:07.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:06 vm11 bash[17885]: cluster 2026-03-09T14:31:04.968546+0000 mgr.y (mgr.14152) 80 : cluster [DBG] pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 18 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:31:07.232 INFO:teuthology.orchestra.run.vm07.stdout:Created osd(s) 3 on host 'vm07' 2026-03-09T14:31:07.290 DEBUG:teuthology.orchestra.run.vm07:osd.3> sudo journalctl -f -n 0 -u ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@osd.3.service 2026-03-09T14:31:07.291 INFO:tasks.cephadm:Deploying osd.4 on vm11 with /dev/vde... 2026-03-09T14:31:07.291 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- lvm zap /dev/vde 2026-03-09T14:31:07.840 INFO:teuthology.orchestra.run.vm11.stdout: 2026-03-09T14:31:07.850 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph orch daemon add osd vm11:/dev/vde 2026-03-09T14:31:08.073 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:07 vm11 bash[17885]: audit 2026-03-09T14:31:06.841951+0000 mon.a (mon.0) 373 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:08.073 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:07 vm11 bash[17885]: audit 2026-03-09T14:31:06.849215+0000 mon.a (mon.0) 374 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:08.073 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:07 vm11 bash[17885]: cluster 2026-03-09T14:31:06.968762+0000 mgr.y (mgr.14152) 81 : cluster [DBG] pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 18 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:31:08.073 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:07 vm11 bash[17885]: audit 2026-03-09T14:31:07.027420+0000 mon.a (mon.0) 375 : audit [INF] from='osd.3 [v2:192.168.123.107:6826/2142580280,v1:192.168.123.107:6827/2142580280]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T14:31:08.073 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:07 vm11 bash[17885]: audit 2026-03-09T14:31:07.225041+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:08.073 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:07 vm11 bash[17885]: audit 2026-03-09T14:31:07.264559+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:31:08.073 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:07 vm11 bash[17885]: audit 2026-03-09T14:31:07.265290+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:08.074 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:07 vm11 bash[17885]: audit 2026-03-09T14:31:07.265727+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:31:08.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:07 vm07 bash[22585]: audit 
2026-03-09T14:31:06.841951+0000 mon.a (mon.0) 373 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:08.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:07 vm07 bash[22585]: audit 2026-03-09T14:31:06.849215+0000 mon.a (mon.0) 374 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:08.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:07 vm07 bash[22585]: cluster 2026-03-09T14:31:06.968762+0000 mgr.y (mgr.14152) 81 : cluster [DBG] pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 18 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:31:08.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:07 vm07 bash[22585]: audit 2026-03-09T14:31:07.027420+0000 mon.a (mon.0) 375 : audit [INF] from='osd.3 [v2:192.168.123.107:6826/2142580280,v1:192.168.123.107:6827/2142580280]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T14:31:08.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:07 vm07 bash[22585]: audit 2026-03-09T14:31:07.225041+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:08.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:07 vm07 bash[22585]: audit 2026-03-09T14:31:07.264559+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:31:08.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:07 vm07 bash[22585]: audit 2026-03-09T14:31:07.265290+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:08.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:07 vm07 bash[22585]: audit 2026-03-09T14:31:07.265727+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:31:08.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:07 vm07 bash[17480]: audit 2026-03-09T14:31:06.841951+0000 mon.a (mon.0) 373 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:08.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:07 vm07 bash[17480]: audit 2026-03-09T14:31:06.849215+0000 mon.a (mon.0) 374 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:08.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:07 vm07 bash[17480]: cluster 2026-03-09T14:31:06.968762+0000 mgr.y (mgr.14152) 81 : cluster [DBG] pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 18 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:31:08.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:07 vm07 bash[17480]: audit 2026-03-09T14:31:07.027420+0000 mon.a (mon.0) 375 : audit [INF] from='osd.3 [v2:192.168.123.107:6826/2142580280,v1:192.168.123.107:6827/2142580280]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T14:31:08.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:07 vm07 bash[17480]: audit 2026-03-09T14:31:07.225041+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:08.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:07 vm07 bash[17480]: audit 2026-03-09T14:31:07.264559+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14152 
192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:31:08.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:07 vm07 bash[17480]: audit 2026-03-09T14:31:07.265290+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:08.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:07 vm07 bash[17480]: audit 2026-03-09T14:31:07.265727+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:31:09.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:08 vm07 bash[22585]: audit 2026-03-09T14:31:07.856161+0000 mon.a (mon.0) 380 : audit [INF] from='osd.3 [v2:192.168.123.107:6826/2142580280,v1:192.168.123.107:6827/2142580280]' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T14:31:09.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:08 vm07 bash[22585]: cluster 2026-03-09T14:31:07.856196+0000 mon.a (mon.0) 381 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in 2026-03-09T14:31:09.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:08 vm07 bash[22585]: audit 2026-03-09T14:31:07.857403+0000 mon.a (mon.0) 382 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:31:09.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:08 vm07 bash[22585]: audit 2026-03-09T14:31:07.857524+0000 mon.a (mon.0) 383 : audit [INF] from='osd.3 [v2:192.168.123.107:6826/2142580280,v1:192.168.123.107:6827/2142580280]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:31:09.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:08 vm07 bash[22585]: audit 2026-03-09T14:31:08.238875+0000 mgr.y (mgr.14152) 82 : audit [DBG] from='client.24179 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm11:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:31:09.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:08 vm07 bash[22585]: audit 2026-03-09T14:31:08.240036+0000 mon.a (mon.0) 384 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:31:09.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:08 vm07 bash[22585]: audit 2026-03-09T14:31:08.241546+0000 mon.a (mon.0) 385 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:31:09.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:08 vm07 bash[22585]: audit 2026-03-09T14:31:08.241933+0000 mon.a (mon.0) 386 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:09.168 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:31:08 vm07 bash[34782]: debug 2026-03-09T14:31:08.860+0000 7f6c132e1700 -1 osd.3 0 waiting for initial osdmap 2026-03-09T14:31:09.168 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:31:08 vm07 bash[34782]: debug 2026-03-09T14:31:08.868+0000 7f6c0e479700 -1 osd.3 24 set_numa_affinity unable to identify public interface '' numa node: (2) No such file 
or directory 2026-03-09T14:31:09.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:08 vm07 bash[17480]: audit 2026-03-09T14:31:07.856161+0000 mon.a (mon.0) 380 : audit [INF] from='osd.3 [v2:192.168.123.107:6826/2142580280,v1:192.168.123.107:6827/2142580280]' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T14:31:09.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:08 vm07 bash[17480]: cluster 2026-03-09T14:31:07.856196+0000 mon.a (mon.0) 381 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in 2026-03-09T14:31:09.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:08 vm07 bash[17480]: audit 2026-03-09T14:31:07.857403+0000 mon.a (mon.0) 382 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:31:09.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:08 vm07 bash[17480]: audit 2026-03-09T14:31:07.857524+0000 mon.a (mon.0) 383 : audit [INF] from='osd.3 [v2:192.168.123.107:6826/2142580280,v1:192.168.123.107:6827/2142580280]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:31:09.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:08 vm07 bash[17480]: audit 2026-03-09T14:31:08.238875+0000 mgr.y (mgr.14152) 82 : audit [DBG] from='client.24179 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm11:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:31:09.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:08 vm07 bash[17480]: audit 2026-03-09T14:31:08.240036+0000 mon.a (mon.0) 384 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:31:09.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:08 vm07 bash[17480]: audit 2026-03-09T14:31:08.241546+0000 mon.a (mon.0) 385 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:31:09.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:08 vm07 bash[17480]: audit 2026-03-09T14:31:08.241933+0000 mon.a (mon.0) 386 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:09.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:08 vm11 bash[17885]: audit 2026-03-09T14:31:07.856161+0000 mon.a (mon.0) 380 : audit [INF] from='osd.3 [v2:192.168.123.107:6826/2142580280,v1:192.168.123.107:6827/2142580280]' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T14:31:09.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:08 vm11 bash[17885]: cluster 2026-03-09T14:31:07.856196+0000 mon.a (mon.0) 381 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in 2026-03-09T14:31:09.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:08 vm11 bash[17885]: audit 2026-03-09T14:31:07.857403+0000 mon.a (mon.0) 382 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:31:09.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:08 vm11 bash[17885]: audit 2026-03-09T14:31:07.857524+0000 mon.a (mon.0) 383 : audit [INF] from='osd.3 [v2:192.168.123.107:6826/2142580280,v1:192.168.123.107:6827/2142580280]' 
entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:31:09.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:08 vm11 bash[17885]: audit 2026-03-09T14:31:08.238875+0000 mgr.y (mgr.14152) 82 : audit [DBG] from='client.24179 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm11:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:31:09.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:08 vm11 bash[17885]: audit 2026-03-09T14:31:08.240036+0000 mon.a (mon.0) 384 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:31:09.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:08 vm11 bash[17885]: audit 2026-03-09T14:31:08.241546+0000 mon.a (mon.0) 385 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:31:09.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:08 vm11 bash[17885]: audit 2026-03-09T14:31:08.241933+0000 mon.a (mon.0) 386 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:10.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:09 vm07 bash[22585]: audit 2026-03-09T14:31:08.858885+0000 mon.a (mon.0) 387 : audit [INF] from='osd.3 [v2:192.168.123.107:6826/2142580280,v1:192.168.123.107:6827/2142580280]' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-09T14:31:10.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:09 vm07 bash[22585]: cluster 2026-03-09T14:31:08.858946+0000 mon.a (mon.0) 388 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in 2026-03-09T14:31:10.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:09 vm07 bash[22585]: audit 2026-03-09T14:31:08.861692+0000 mon.a (mon.0) 389 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:31:10.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:09 vm07 bash[22585]: audit 2026-03-09T14:31:08.869767+0000 mon.a (mon.0) 390 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:31:10.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:09 vm07 bash[22585]: cluster 2026-03-09T14:31:08.968952+0000 mgr.y (mgr.14152) 83 : cluster [DBG] pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 18 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:31:10.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:09 vm07 bash[22585]: audit 2026-03-09T14:31:09.864235+0000 mon.a (mon.0) 391 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:31:10.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:09 vm07 bash[17480]: audit 2026-03-09T14:31:08.858885+0000 mon.a (mon.0) 387 : audit [INF] from='osd.3 [v2:192.168.123.107:6826/2142580280,v1:192.168.123.107:6827/2142580280]' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-09T14:31:10.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:09 vm07 bash[17480]: cluster 
2026-03-09T14:31:08.858946+0000 mon.a (mon.0) 388 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in 2026-03-09T14:31:10.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:09 vm07 bash[17480]: audit 2026-03-09T14:31:08.861692+0000 mon.a (mon.0) 389 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:31:10.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:09 vm07 bash[17480]: audit 2026-03-09T14:31:08.869767+0000 mon.a (mon.0) 390 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:31:10.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:09 vm07 bash[17480]: cluster 2026-03-09T14:31:08.968952+0000 mgr.y (mgr.14152) 83 : cluster [DBG] pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 18 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:31:10.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:09 vm07 bash[17480]: audit 2026-03-09T14:31:09.864235+0000 mon.a (mon.0) 391 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:31:10.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:09 vm11 bash[17885]: audit 2026-03-09T14:31:08.858885+0000 mon.a (mon.0) 387 : audit [INF] from='osd.3 [v2:192.168.123.107:6826/2142580280,v1:192.168.123.107:6827/2142580280]' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-09T14:31:10.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:09 vm11 bash[17885]: cluster 2026-03-09T14:31:08.858946+0000 mon.a (mon.0) 388 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in 2026-03-09T14:31:10.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:09 vm11 bash[17885]: audit 2026-03-09T14:31:08.861692+0000 mon.a (mon.0) 389 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:31:10.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:09 vm11 bash[17885]: audit 2026-03-09T14:31:08.869767+0000 mon.a (mon.0) 390 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:31:10.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:09 vm11 bash[17885]: cluster 2026-03-09T14:31:08.968952+0000 mgr.y (mgr.14152) 83 : cluster [DBG] pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 18 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:31:10.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:09 vm11 bash[17885]: audit 2026-03-09T14:31:09.864235+0000 mon.a (mon.0) 391 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:31:11.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:10 vm07 bash[22585]: cluster 2026-03-09T14:31:08.073088+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T14:31:11.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:10 vm07 bash[22585]: cluster 2026-03-09T14:31:08.073167+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T14:31:11.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:10 vm07 bash[22585]: cluster 2026-03-09T14:31:09.868120+0000 mon.a (mon.0) 392 : cluster [INF] osd.3 [v2:192.168.123.107:6826/2142580280,v1:192.168.123.107:6827/2142580280] boot 2026-03-09T14:31:11.167 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:10 vm07 bash[22585]: cluster 2026-03-09T14:31:09.868138+0000 mon.a (mon.0) 393 : cluster [DBG] osdmap e25: 4 total, 4 up, 4 in 2026-03-09T14:31:11.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:10 vm07 bash[22585]: audit 2026-03-09T14:31:09.871305+0000 mon.a (mon.0) 394 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:31:11.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:10 vm07 bash[17480]: cluster 2026-03-09T14:31:08.073088+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T14:31:11.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:10 vm07 bash[17480]: cluster 2026-03-09T14:31:08.073167+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T14:31:11.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:10 vm07 bash[17480]: cluster 2026-03-09T14:31:09.868120+0000 mon.a (mon.0) 392 : cluster [INF] osd.3 [v2:192.168.123.107:6826/2142580280,v1:192.168.123.107:6827/2142580280] boot 2026-03-09T14:31:11.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:10 vm07 bash[17480]: cluster 2026-03-09T14:31:09.868138+0000 mon.a (mon.0) 393 : cluster [DBG] osdmap e25: 4 total, 4 up, 4 in 2026-03-09T14:31:11.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:10 vm07 bash[17480]: audit 2026-03-09T14:31:09.871305+0000 mon.a (mon.0) 394 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:31:11.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:10 vm11 bash[17885]: cluster 2026-03-09T14:31:08.073088+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T14:31:11.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:10 vm11 bash[17885]: cluster 2026-03-09T14:31:08.073167+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T14:31:11.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:10 vm11 bash[17885]: cluster 2026-03-09T14:31:09.868120+0000 mon.a (mon.0) 392 : cluster [INF] osd.3 [v2:192.168.123.107:6826/2142580280,v1:192.168.123.107:6827/2142580280] boot 2026-03-09T14:31:11.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:10 vm11 bash[17885]: cluster 2026-03-09T14:31:09.868138+0000 mon.a (mon.0) 393 : cluster [DBG] osdmap e25: 4 total, 4 up, 4 in 2026-03-09T14:31:11.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:10 vm11 bash[17885]: audit 2026-03-09T14:31:09.871305+0000 mon.a (mon.0) 394 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:31:12.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:11 vm07 bash[22585]: cluster 2026-03-09T14:31:10.882434+0000 mon.a (mon.0) 395 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in 2026-03-09T14:31:12.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:11 vm07 bash[22585]: cluster 2026-03-09T14:31:10.969169+0000 mgr.y (mgr.14152) 84 : cluster [DBG] pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:31:12.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:11 vm07 bash[22585]: audit 2026-03-09T14:31:11.412969+0000 mon.b (mon.2) 6 : audit [INF] from='client.? 
192.168.123.111:0/133185966' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8e6cc346-4281-49a1-9886-18c25e9addfc"}]: dispatch 2026-03-09T14:31:12.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:11 vm07 bash[22585]: audit 2026-03-09T14:31:11.413108+0000 mon.a (mon.0) 396 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8e6cc346-4281-49a1-9886-18c25e9addfc"}]: dispatch 2026-03-09T14:31:12.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:11 vm07 bash[22585]: audit 2026-03-09T14:31:11.418573+0000 mon.a (mon.0) 397 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8e6cc346-4281-49a1-9886-18c25e9addfc"}]': finished 2026-03-09T14:31:12.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:11 vm07 bash[22585]: cluster 2026-03-09T14:31:11.418607+0000 mon.a (mon.0) 398 : cluster [DBG] osdmap e27: 5 total, 4 up, 5 in 2026-03-09T14:31:12.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:11 vm07 bash[22585]: audit 2026-03-09T14:31:11.418667+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:31:12.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:11 vm07 bash[22585]: audit 2026-03-09T14:31:11.634314+0000 mon.a (mon.0) 400 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:12.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:11 vm07 bash[22585]: audit 2026-03-09T14:31:11.634959+0000 mon.a (mon.0) 401 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:31:12.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:11 vm07 bash[22585]: audit 2026-03-09T14:31:11.638853+0000 mon.a (mon.0) 402 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:12.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:11 vm07 bash[17480]: cluster 2026-03-09T14:31:10.882434+0000 mon.a (mon.0) 395 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in 2026-03-09T14:31:12.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:11 vm07 bash[17480]: cluster 2026-03-09T14:31:10.969169+0000 mgr.y (mgr.14152) 84 : cluster [DBG] pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:31:12.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:11 vm07 bash[17480]: audit 2026-03-09T14:31:11.412969+0000 mon.b (mon.2) 6 : audit [INF] from='client.? 192.168.123.111:0/133185966' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8e6cc346-4281-49a1-9886-18c25e9addfc"}]: dispatch 2026-03-09T14:31:12.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:11 vm07 bash[17480]: audit 2026-03-09T14:31:11.413108+0000 mon.a (mon.0) 396 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8e6cc346-4281-49a1-9886-18c25e9addfc"}]: dispatch 2026-03-09T14:31:12.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:11 vm07 bash[17480]: audit 2026-03-09T14:31:11.418573+0000 mon.a (mon.0) 397 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8e6cc346-4281-49a1-9886-18c25e9addfc"}]': finished 2026-03-09T14:31:12.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:11 vm07 bash[17480]: cluster 2026-03-09T14:31:11.418607+0000 mon.a (mon.0) 398 : cluster [DBG] osdmap e27: 5 total, 4 up, 5 in 2026-03-09T14:31:12.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:11 vm07 bash[17480]: audit 2026-03-09T14:31:11.418667+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:31:12.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:11 vm07 bash[17480]: audit 2026-03-09T14:31:11.634314+0000 mon.a (mon.0) 400 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:12.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:11 vm07 bash[17480]: audit 2026-03-09T14:31:11.634959+0000 mon.a (mon.0) 401 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:31:12.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:11 vm07 bash[17480]: audit 2026-03-09T14:31:11.638853+0000 mon.a (mon.0) 402 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:12.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:11 vm11 bash[17885]: cluster 2026-03-09T14:31:10.882434+0000 mon.a (mon.0) 395 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in 2026-03-09T14:31:12.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:11 vm11 bash[17885]: cluster 2026-03-09T14:31:10.969169+0000 mgr.y (mgr.14152) 84 : cluster [DBG] pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:31:12.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:11 vm11 bash[17885]: audit 2026-03-09T14:31:11.412969+0000 mon.b (mon.2) 6 : audit [INF] from='client.? 192.168.123.111:0/133185966' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8e6cc346-4281-49a1-9886-18c25e9addfc"}]: dispatch 2026-03-09T14:31:12.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:11 vm11 bash[17885]: audit 2026-03-09T14:31:11.413108+0000 mon.a (mon.0) 396 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8e6cc346-4281-49a1-9886-18c25e9addfc"}]: dispatch 2026-03-09T14:31:12.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:11 vm11 bash[17885]: audit 2026-03-09T14:31:11.418573+0000 mon.a (mon.0) 397 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8e6cc346-4281-49a1-9886-18c25e9addfc"}]': finished 2026-03-09T14:31:12.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:11 vm11 bash[17885]: cluster 2026-03-09T14:31:11.418607+0000 mon.a (mon.0) 398 : cluster [DBG] osdmap e27: 5 total, 4 up, 5 in 2026-03-09T14:31:12.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:11 vm11 bash[17885]: audit 2026-03-09T14:31:11.418667+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:31:12.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:11 vm11 bash[17885]: audit 2026-03-09T14:31:11.634314+0000 mon.a (mon.0) 400 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:12.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:11 vm11 bash[17885]: audit 2026-03-09T14:31:11.634959+0000 mon.a (mon.0) 401 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:31:12.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:11 vm11 bash[17885]: audit 2026-03-09T14:31:11.638853+0000 mon.a (mon.0) 402 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:13.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:12 vm07 bash[22585]: cephadm 2026-03-09T14:31:11.628474+0000 mgr.y (mgr.14152) 85 : cephadm [INF] Detected new or changed devices on vm07 2026-03-09T14:31:13.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:12 vm07 bash[22585]: audit 2026-03-09T14:31:12.058700+0000 mon.b (mon.2) 7 : audit [DBG] from='client.? 192.168.123.111:0/2677401379' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:31:13.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:12 vm07 bash[17480]: cephadm 2026-03-09T14:31:11.628474+0000 mgr.y (mgr.14152) 85 : cephadm [INF] Detected new or changed devices on vm07 2026-03-09T14:31:13.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:12 vm07 bash[17480]: audit 2026-03-09T14:31:12.058700+0000 mon.b (mon.2) 7 : audit [DBG] from='client.? 192.168.123.111:0/2677401379' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:31:13.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:12 vm11 bash[17885]: cephadm 2026-03-09T14:31:11.628474+0000 mgr.y (mgr.14152) 85 : cephadm [INF] Detected new or changed devices on vm07 2026-03-09T14:31:13.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:12 vm11 bash[17885]: audit 2026-03-09T14:31:12.058700+0000 mon.b (mon.2) 7 : audit [DBG] from='client.? 
192.168.123.111:0/2677401379' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:31:14.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:13 vm07 bash[22585]: cluster 2026-03-09T14:31:12.969466+0000 mgr.y (mgr.14152) 86 : cluster [DBG] pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:31:14.167 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:13 vm07 bash[17480]: cluster 2026-03-09T14:31:12.969466+0000 mgr.y (mgr.14152) 86 : cluster [DBG] pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:31:14.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:13 vm11 bash[17885]: cluster 2026-03-09T14:31:12.969466+0000 mgr.y (mgr.14152) 86 : cluster [DBG] pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:31:16.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:16 vm07 bash[22585]: cluster 2026-03-09T14:31:14.969732+0000 mgr.y (mgr.14152) 87 : cluster [DBG] pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:31:16.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:16 vm07 bash[17480]: cluster 2026-03-09T14:31:14.969732+0000 mgr.y (mgr.14152) 87 : cluster [DBG] pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:31:17.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:16 vm11 bash[17885]: cluster 2026-03-09T14:31:14.969732+0000 mgr.y (mgr.14152) 87 : cluster [DBG] pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:31:17.711 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:17 vm11 bash[17885]: audit 2026-03-09T14:31:17.456945+0000 mon.a (mon.0) 403 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T14:31:17.711 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:17 vm11 bash[17885]: audit 2026-03-09T14:31:17.457383+0000 mon.a (mon.0) 404 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:17.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:17 vm07 bash[22585]: audit 2026-03-09T14:31:17.456945+0000 mon.a (mon.0) 403 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T14:31:17.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:17 vm07 bash[22585]: audit 2026-03-09T14:31:17.457383+0000 mon.a (mon.0) 404 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:17.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:17 vm07 bash[17480]: audit 2026-03-09T14:31:17.456945+0000 mon.a (mon.0) 403 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T14:31:17.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:17 vm07 bash[17480]: audit 2026-03-09T14:31:17.457383+0000 mon.a (mon.0) 404 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:18.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:18 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to 
use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:31:18.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:18 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:31:18.261 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:31:18 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:31:18.261 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:31:18 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:31:18.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:18 vm07 bash[22585]: cluster 2026-03-09T14:31:16.969998+0000 mgr.y (mgr.14152) 88 : cluster [DBG] pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:31:18.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:18 vm07 bash[22585]: cephadm 2026-03-09T14:31:17.457744+0000 mgr.y (mgr.14152) 89 : cephadm [INF] Deploying daemon osd.4 on vm11 2026-03-09T14:31:18.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:18 vm07 bash[22585]: audit 2026-03-09T14:31:18.268841+0000 mon.a (mon.0) 405 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:31:18.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:18 vm07 bash[22585]: audit 2026-03-09T14:31:18.269641+0000 mon.a (mon.0) 406 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:18.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:18 vm07 bash[22585]: audit 2026-03-09T14:31:18.270114+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:31:18.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:18 vm07 bash[22585]: audit 2026-03-09T14:31:18.276169+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:18.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:18 vm07 bash[17480]: cluster 2026-03-09T14:31:16.969998+0000 mgr.y (mgr.14152) 88 : cluster [DBG] pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:31:18.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:18 vm07 bash[17480]: 
cephadm 2026-03-09T14:31:17.457744+0000 mgr.y (mgr.14152) 89 : cephadm [INF] Deploying daemon osd.4 on vm11 2026-03-09T14:31:18.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:18 vm07 bash[17480]: audit 2026-03-09T14:31:18.268841+0000 mon.a (mon.0) 405 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:31:18.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:18 vm07 bash[17480]: audit 2026-03-09T14:31:18.269641+0000 mon.a (mon.0) 406 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:18.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:18 vm07 bash[17480]: audit 2026-03-09T14:31:18.270114+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:31:18.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:18 vm07 bash[17480]: audit 2026-03-09T14:31:18.276169+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:19.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:18 vm11 bash[17885]: cluster 2026-03-09T14:31:16.969998+0000 mgr.y (mgr.14152) 88 : cluster [DBG] pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:31:19.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:18 vm11 bash[17885]: cephadm 2026-03-09T14:31:17.457744+0000 mgr.y (mgr.14152) 89 : cephadm [INF] Deploying daemon osd.4 on vm11 2026-03-09T14:31:19.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:18 vm11 bash[17885]: audit 2026-03-09T14:31:18.268841+0000 mon.a (mon.0) 405 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:31:19.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:18 vm11 bash[17885]: audit 2026-03-09T14:31:18.269641+0000 mon.a (mon.0) 406 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:19.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:18 vm11 bash[17885]: audit 2026-03-09T14:31:18.270114+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:31:19.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:18 vm11 bash[17885]: audit 2026-03-09T14:31:18.276169+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:20.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:20 vm07 bash[22585]: cluster 2026-03-09T14:31:18.970287+0000 mgr.y (mgr.14152) 90 : cluster [DBG] pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:31:20.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:20 vm07 bash[17480]: cluster 2026-03-09T14:31:18.970287+0000 mgr.y (mgr.14152) 90 : cluster [DBG] pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:31:20.952 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:20 vm11 bash[17885]: cluster 2026-03-09T14:31:18.970287+0000 mgr.y (mgr.14152) 90 : cluster [DBG] pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 
2026-03-09T14:31:21.514 INFO:teuthology.orchestra.run.vm11.stdout:Created osd(s) 4 on host 'vm11' 2026-03-09T14:31:21.577 DEBUG:teuthology.orchestra.run.vm11:osd.4> sudo journalctl -f -n 0 -u ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@osd.4.service 2026-03-09T14:31:21.578 INFO:tasks.cephadm:Deploying osd.5 on vm11 with /dev/vdd... 2026-03-09T14:31:21.578 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- lvm zap /dev/vdd 2026-03-09T14:31:22.164 INFO:teuthology.orchestra.run.vm11.stdout: 2026-03-09T14:31:22.172 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph orch daemon add osd vm11:/dev/vdd 2026-03-09T14:31:22.365 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:22 vm11 bash[17885]: cluster 2026-03-09T14:31:20.970538+0000 mgr.y (mgr.14152) 91 : cluster [DBG] pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:31:22.365 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:22 vm11 bash[17885]: audit 2026-03-09T14:31:21.134708+0000 mon.a (mon.0) 409 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:22.365 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:22 vm11 bash[17885]: audit 2026-03-09T14:31:21.139799+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:22.365 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:22 vm11 bash[17885]: audit 2026-03-09T14:31:21.367386+0000 mon.a (mon.0) 411 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:31:22.365 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:22 vm11 bash[17885]: audit 2026-03-09T14:31:21.367402+0000 mon.b (mon.2) 8 : audit [INF] from='osd.4 [v2:192.168.123.111:6800/2733246535,v1:192.168.123.111:6801/2733246535]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:31:22.365 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:22 vm11 bash[17885]: audit 2026-03-09T14:31:21.504912+0000 mon.a (mon.0) 412 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:31:22.365 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:22 vm11 bash[17885]: audit 2026-03-09T14:31:21.507603+0000 mon.a (mon.0) 413 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:22.365 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:22 vm11 bash[17885]: audit 2026-03-09T14:31:21.507907+0000 mon.a (mon.0) 414 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:22.365 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:22 vm11 bash[17885]: audit 2026-03-09T14:31:21.508920+0000 mon.a (mon.0) 415 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:31:22.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:22 vm07 bash[22585]: cluster 2026-03-09T14:31:20.970538+0000 mgr.y (mgr.14152) 91 
: cluster [DBG] pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:31:22.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:22 vm07 bash[22585]: audit 2026-03-09T14:31:21.134708+0000 mon.a (mon.0) 409 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:22.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:22 vm07 bash[22585]: audit 2026-03-09T14:31:21.139799+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:22.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:22 vm07 bash[22585]: audit 2026-03-09T14:31:21.367386+0000 mon.a (mon.0) 411 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:31:22.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:22 vm07 bash[22585]: audit 2026-03-09T14:31:21.367402+0000 mon.b (mon.2) 8 : audit [INF] from='osd.4 [v2:192.168.123.111:6800/2733246535,v1:192.168.123.111:6801/2733246535]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:31:22.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:22 vm07 bash[22585]: audit 2026-03-09T14:31:21.504912+0000 mon.a (mon.0) 412 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:31:22.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:22 vm07 bash[22585]: audit 2026-03-09T14:31:21.507603+0000 mon.a (mon.0) 413 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:22.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:22 vm07 bash[22585]: audit 2026-03-09T14:31:21.507907+0000 mon.a (mon.0) 414 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:22.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:22 vm07 bash[22585]: audit 2026-03-09T14:31:21.508920+0000 mon.a (mon.0) 415 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:31:22.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:22 vm07 bash[17480]: cluster 2026-03-09T14:31:20.970538+0000 mgr.y (mgr.14152) 91 : cluster [DBG] pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:31:22.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:22 vm07 bash[17480]: audit 2026-03-09T14:31:21.134708+0000 mon.a (mon.0) 409 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:22.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:22 vm07 bash[17480]: audit 2026-03-09T14:31:21.139799+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:22.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:22 vm07 bash[17480]: audit 2026-03-09T14:31:21.367386+0000 mon.a (mon.0) 411 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:31:22.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:22 vm07 bash[17480]: audit 2026-03-09T14:31:21.367402+0000 mon.b (mon.2) 8 : audit [INF] from='osd.4 [v2:192.168.123.111:6800/2733246535,v1:192.168.123.111:6801/2733246535]' entity='osd.4' 
cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:31:22.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:22 vm07 bash[17480]: audit 2026-03-09T14:31:21.504912+0000 mon.a (mon.0) 412 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:31:22.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:22 vm07 bash[17480]: audit 2026-03-09T14:31:21.507603+0000 mon.a (mon.0) 413 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:22.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:22 vm07 bash[17480]: audit 2026-03-09T14:31:21.507907+0000 mon.a (mon.0) 414 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:22.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:22 vm07 bash[17480]: audit 2026-03-09T14:31:21.508920+0000 mon.a (mon.0) 415 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:31:23.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:23 vm07 bash[22585]: audit 2026-03-09T14:31:22.149859+0000 mon.a (mon.0) 416 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T14:31:23.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:23 vm07 bash[22585]: cluster 2026-03-09T14:31:22.150047+0000 mon.a (mon.0) 417 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-09T14:31:23.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:23 vm07 bash[22585]: audit 2026-03-09T14:31:22.150233+0000 mon.a (mon.0) 418 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:31:23.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:23 vm07 bash[22585]: audit 2026-03-09T14:31:22.150750+0000 mon.a (mon.0) 419 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:31:23.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:23 vm07 bash[22585]: audit 2026-03-09T14:31:22.150803+0000 mon.b (mon.2) 9 : audit [INF] from='osd.4 [v2:192.168.123.111:6800/2733246535,v1:192.168.123.111:6801/2733246535]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:31:23.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:23 vm07 bash[22585]: audit 2026-03-09T14:31:22.529804+0000 mgr.y (mgr.14152) 92 : audit [DBG] from='client.24206 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm11:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:31:23.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:23 vm07 bash[22585]: audit 2026-03-09T14:31:22.531101+0000 mon.a (mon.0) 420 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:31:23.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:23 vm07 bash[22585]: audit 2026-03-09T14:31:22.532205+0000 mon.a (mon.0) 421 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": 
"client.bootstrap-osd"}]: dispatch 2026-03-09T14:31:23.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:23 vm07 bash[22585]: audit 2026-03-09T14:31:22.532552+0000 mon.a (mon.0) 422 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:23.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:23 vm07 bash[17480]: audit 2026-03-09T14:31:22.149859+0000 mon.a (mon.0) 416 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T14:31:23.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:23 vm07 bash[17480]: cluster 2026-03-09T14:31:22.150047+0000 mon.a (mon.0) 417 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-09T14:31:23.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:23 vm07 bash[17480]: audit 2026-03-09T14:31:22.150233+0000 mon.a (mon.0) 418 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:31:23.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:23 vm07 bash[17480]: audit 2026-03-09T14:31:22.150750+0000 mon.a (mon.0) 419 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:31:23.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:23 vm07 bash[17480]: audit 2026-03-09T14:31:22.150803+0000 mon.b (mon.2) 9 : audit [INF] from='osd.4 [v2:192.168.123.111:6800/2733246535,v1:192.168.123.111:6801/2733246535]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:31:23.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:23 vm07 bash[17480]: audit 2026-03-09T14:31:22.529804+0000 mgr.y (mgr.14152) 92 : audit [DBG] from='client.24206 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm11:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:31:23.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:23 vm07 bash[17480]: audit 2026-03-09T14:31:22.531101+0000 mon.a (mon.0) 420 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:31:23.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:23 vm07 bash[17480]: audit 2026-03-09T14:31:22.532205+0000 mon.a (mon.0) 421 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:31:23.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:23 vm07 bash[17480]: audit 2026-03-09T14:31:22.532552+0000 mon.a (mon.0) 422 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:23.511 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:31:23 vm11 bash[20835]: debug 2026-03-09T14:31:23.162+0000 7fbbce34a700 -1 osd.4 0 waiting for initial osdmap 2026-03-09T14:31:23.511 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:31:23 vm11 bash[20835]: debug 2026-03-09T14:31:23.166+0000 7fbbc6cdd700 -1 osd.4 29 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T14:31:23.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:23 vm11 bash[17885]: audit 
2026-03-09T14:31:22.149859+0000 mon.a (mon.0) 416 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T14:31:23.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:23 vm11 bash[17885]: cluster 2026-03-09T14:31:22.150047+0000 mon.a (mon.0) 417 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-09T14:31:23.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:23 vm11 bash[17885]: audit 2026-03-09T14:31:22.150233+0000 mon.a (mon.0) 418 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:31:23.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:23 vm11 bash[17885]: audit 2026-03-09T14:31:22.150750+0000 mon.a (mon.0) 419 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:31:23.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:23 vm11 bash[17885]: audit 2026-03-09T14:31:22.150803+0000 mon.b (mon.2) 9 : audit [INF] from='osd.4 [v2:192.168.123.111:6800/2733246535,v1:192.168.123.111:6801/2733246535]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:31:23.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:23 vm11 bash[17885]: audit 2026-03-09T14:31:22.529804+0000 mgr.y (mgr.14152) 92 : audit [DBG] from='client.24206 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm11:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:31:23.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:23 vm11 bash[17885]: audit 2026-03-09T14:31:22.531101+0000 mon.a (mon.0) 420 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:31:23.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:23 vm11 bash[17885]: audit 2026-03-09T14:31:22.532205+0000 mon.a (mon.0) 421 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:31:23.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:23 vm11 bash[17885]: audit 2026-03-09T14:31:22.532552+0000 mon.a (mon.0) 422 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:24.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:24 vm07 bash[22585]: cluster 2026-03-09T14:31:22.970789+0000 mgr.y (mgr.14152) 93 : cluster [DBG] pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:31:24.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:24 vm07 bash[22585]: audit 2026-03-09T14:31:23.154788+0000 mon.a (mon.0) 423 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm11", "root=default"]}]': finished 2026-03-09T14:31:24.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:24 vm07 bash[22585]: cluster 2026-03-09T14:31:23.154869+0000 mon.a (mon.0) 424 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in 2026-03-09T14:31:24.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:24 vm07 bash[22585]: audit 2026-03-09T14:31:23.155878+0000 mon.a (mon.0) 425 : audit [DBG] from='mgr.14152 
192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:31:24.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:24 vm07 bash[22585]: audit 2026-03-09T14:31:23.157450+0000 mon.a (mon.0) 426 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:31:24.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:24 vm07 bash[22585]: cluster 2026-03-09T14:31:24.158238+0000 mon.a (mon.0) 427 : cluster [INF] osd.4 [v2:192.168.123.111:6800/2733246535,v1:192.168.123.111:6801/2733246535] boot 2026-03-09T14:31:24.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:24 vm07 bash[22585]: cluster 2026-03-09T14:31:24.158291+0000 mon.a (mon.0) 428 : cluster [DBG] osdmap e30: 5 total, 5 up, 5 in 2026-03-09T14:31:24.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:24 vm07 bash[17480]: cluster 2026-03-09T14:31:22.970789+0000 mgr.y (mgr.14152) 93 : cluster [DBG] pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:31:24.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:24 vm07 bash[17480]: audit 2026-03-09T14:31:23.154788+0000 mon.a (mon.0) 423 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm11", "root=default"]}]': finished 2026-03-09T14:31:24.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:24 vm07 bash[17480]: cluster 2026-03-09T14:31:23.154869+0000 mon.a (mon.0) 424 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in 2026-03-09T14:31:24.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:24 vm07 bash[17480]: audit 2026-03-09T14:31:23.155878+0000 mon.a (mon.0) 425 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:31:24.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:24 vm07 bash[17480]: audit 2026-03-09T14:31:23.157450+0000 mon.a (mon.0) 426 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:31:24.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:24 vm07 bash[17480]: cluster 2026-03-09T14:31:24.158238+0000 mon.a (mon.0) 427 : cluster [INF] osd.4 [v2:192.168.123.111:6800/2733246535,v1:192.168.123.111:6801/2733246535] boot 2026-03-09T14:31:24.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:24 vm07 bash[17480]: cluster 2026-03-09T14:31:24.158291+0000 mon.a (mon.0) 428 : cluster [DBG] osdmap e30: 5 total, 5 up, 5 in 2026-03-09T14:31:24.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:24 vm11 bash[17885]: cluster 2026-03-09T14:31:22.970789+0000 mgr.y (mgr.14152) 93 : cluster [DBG] pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:31:24.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:24 vm11 bash[17885]: audit 2026-03-09T14:31:23.154788+0000 mon.a (mon.0) 423 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm11", "root=default"]}]': finished 2026-03-09T14:31:24.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:24 vm11 bash[17885]: cluster 2026-03-09T14:31:23.154869+0000 mon.a (mon.0) 424 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in 2026-03-09T14:31:24.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:24 vm11 bash[17885]: audit 2026-03-09T14:31:23.155878+0000 mon.a 
(mon.0) 425 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:31:24.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:24 vm11 bash[17885]: audit 2026-03-09T14:31:23.157450+0000 mon.a (mon.0) 426 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:31:24.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:24 vm11 bash[17885]: cluster 2026-03-09T14:31:24.158238+0000 mon.a (mon.0) 427 : cluster [INF] osd.4 [v2:192.168.123.111:6800/2733246535,v1:192.168.123.111:6801/2733246535] boot 2026-03-09T14:31:24.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:24 vm11 bash[17885]: cluster 2026-03-09T14:31:24.158291+0000 mon.a (mon.0) 428 : cluster [DBG] osdmap e30: 5 total, 5 up, 5 in 2026-03-09T14:31:25.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:25 vm11 bash[17885]: cluster 2026-03-09T14:31:22.394796+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T14:31:25.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:25 vm11 bash[17885]: cluster 2026-03-09T14:31:22.394870+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T14:31:25.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:25 vm11 bash[17885]: audit 2026-03-09T14:31:24.159029+0000 mon.a (mon.0) 429 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:31:25.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:25 vm11 bash[17885]: cluster 2026-03-09T14:31:25.162276+0000 mon.a (mon.0) 430 : cluster [DBG] osdmap e31: 5 total, 5 up, 5 in 2026-03-09T14:31:25.667 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:25 vm07 bash[22585]: cluster 2026-03-09T14:31:22.394796+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T14:31:25.668 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:25 vm07 bash[22585]: cluster 2026-03-09T14:31:22.394870+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T14:31:25.668 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:25 vm07 bash[22585]: audit 2026-03-09T14:31:24.159029+0000 mon.a (mon.0) 429 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:31:25.668 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:25 vm07 bash[22585]: cluster 2026-03-09T14:31:25.162276+0000 mon.a (mon.0) 430 : cluster [DBG] osdmap e31: 5 total, 5 up, 5 in 2026-03-09T14:31:25.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:25 vm07 bash[17480]: cluster 2026-03-09T14:31:22.394796+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T14:31:25.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:25 vm07 bash[17480]: cluster 2026-03-09T14:31:22.394870+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T14:31:25.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:25 vm07 bash[17480]: audit 2026-03-09T14:31:24.159029+0000 mon.a (mon.0) 429 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:31:25.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:25 vm07 bash[17480]: cluster 2026-03-09T14:31:25.162276+0000 mon.a (mon.0) 430 : cluster [DBG] osdmap e31: 5 total, 5 up, 5 in 2026-03-09T14:31:26.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 
14:31:26 vm07 bash[22585]: cluster 2026-03-09T14:31:24.971061+0000 mgr.y (mgr.14152) 94 : cluster [DBG] pgmap v71: 1 pgs: 1 remapped+peering; 449 KiB data, 28 MiB used, 100 GiB / 100 GiB avail 2026-03-09T14:31:26.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:26 vm07 bash[22585]: audit 2026-03-09T14:31:25.624524+0000 mon.a (mon.0) 431 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:26.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:26 vm07 bash[22585]: audit 2026-03-09T14:31:25.625345+0000 mon.a (mon.0) 432 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:31:26.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:26 vm07 bash[22585]: audit 2026-03-09T14:31:25.629019+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:26.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:26 vm07 bash[22585]: cluster 2026-03-09T14:31:26.162747+0000 mon.a (mon.0) 434 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in 2026-03-09T14:31:26.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:26 vm07 bash[17480]: cluster 2026-03-09T14:31:24.971061+0000 mgr.y (mgr.14152) 94 : cluster [DBG] pgmap v71: 1 pgs: 1 remapped+peering; 449 KiB data, 28 MiB used, 100 GiB / 100 GiB avail 2026-03-09T14:31:26.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:26 vm07 bash[17480]: audit 2026-03-09T14:31:25.624524+0000 mon.a (mon.0) 431 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:26.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:26 vm07 bash[17480]: audit 2026-03-09T14:31:25.625345+0000 mon.a (mon.0) 432 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:31:26.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:26 vm07 bash[17480]: audit 2026-03-09T14:31:25.629019+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:26.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:26 vm07 bash[17480]: cluster 2026-03-09T14:31:26.162747+0000 mon.a (mon.0) 434 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in 2026-03-09T14:31:27.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:26 vm11 bash[17885]: cluster 2026-03-09T14:31:24.971061+0000 mgr.y (mgr.14152) 94 : cluster [DBG] pgmap v71: 1 pgs: 1 remapped+peering; 449 KiB data, 28 MiB used, 100 GiB / 100 GiB avail 2026-03-09T14:31:27.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:26 vm11 bash[17885]: audit 2026-03-09T14:31:25.624524+0000 mon.a (mon.0) 431 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:27.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:26 vm11 bash[17885]: audit 2026-03-09T14:31:25.625345+0000 mon.a (mon.0) 432 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:31:27.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:26 vm11 bash[17885]: audit 2026-03-09T14:31:25.629019+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:27.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:26 vm11 bash[17885]: cluster 
2026-03-09T14:31:26.162747+0000 mon.a (mon.0) 434 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in 2026-03-09T14:31:27.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:27 vm07 bash[22585]: cephadm 2026-03-09T14:31:25.619654+0000 mgr.y (mgr.14152) 95 : cephadm [INF] Detected new or changed devices on vm11 2026-03-09T14:31:27.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:27 vm07 bash[22585]: cephadm 2026-03-09T14:31:25.625717+0000 mgr.y (mgr.14152) 96 : cephadm [INF] Adjusting osd_memory_target on vm11 to 455.7M 2026-03-09T14:31:27.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:27 vm07 bash[22585]: cephadm 2026-03-09T14:31:25.626114+0000 mgr.y (mgr.14152) 97 : cephadm [WRN] Unable to set osd_memory_target on vm11 to 477918822: error parsing value: Value '477918822' is below minimum 939524096 2026-03-09T14:31:27.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:27 vm07 bash[22585]: audit 2026-03-09T14:31:26.657287+0000 mon.b (mon.2) 10 : audit [INF] from='client.? 192.168.123.111:0/3076707195' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "104be397-ca1c-4a2d-ae2d-97efa37d095a"}]: dispatch 2026-03-09T14:31:27.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:27 vm07 bash[22585]: audit 2026-03-09T14:31:26.657322+0000 mon.a (mon.0) 435 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "104be397-ca1c-4a2d-ae2d-97efa37d095a"}]: dispatch 2026-03-09T14:31:27.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:27 vm07 bash[22585]: audit 2026-03-09T14:31:26.662929+0000 mon.a (mon.0) 436 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "104be397-ca1c-4a2d-ae2d-97efa37d095a"}]': finished 2026-03-09T14:31:27.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:27 vm07 bash[22585]: cluster 2026-03-09T14:31:26.662952+0000 mon.a (mon.0) 437 : cluster [DBG] osdmap e33: 6 total, 5 up, 6 in 2026-03-09T14:31:27.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:27 vm07 bash[22585]: audit 2026-03-09T14:31:26.662996+0000 mon.a (mon.0) 438 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:31:27.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:27 vm07 bash[22585]: audit 2026-03-09T14:31:27.276127+0000 mon.b (mon.2) 11 : audit [DBG] from='client.? 192.168.123.111:0/1658417674' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:31:27.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:27 vm07 bash[17480]: cephadm 2026-03-09T14:31:25.619654+0000 mgr.y (mgr.14152) 95 : cephadm [INF] Detected new or changed devices on vm11 2026-03-09T14:31:27.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:27 vm07 bash[17480]: cephadm 2026-03-09T14:31:25.625717+0000 mgr.y (mgr.14152) 96 : cephadm [INF] Adjusting osd_memory_target on vm11 to 455.7M 2026-03-09T14:31:27.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:27 vm07 bash[17480]: cephadm 2026-03-09T14:31:25.626114+0000 mgr.y (mgr.14152) 97 : cephadm [WRN] Unable to set osd_memory_target on vm11 to 477918822: error parsing value: Value '477918822' is below minimum 939524096 2026-03-09T14:31:27.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:27 vm07 bash[17480]: audit 2026-03-09T14:31:26.657287+0000 mon.b (mon.2) 10 : audit [INF] from='client.? 
192.168.123.111:0/3076707195' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "104be397-ca1c-4a2d-ae2d-97efa37d095a"}]: dispatch 2026-03-09T14:31:27.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:27 vm07 bash[17480]: audit 2026-03-09T14:31:26.657322+0000 mon.a (mon.0) 435 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "104be397-ca1c-4a2d-ae2d-97efa37d095a"}]: dispatch 2026-03-09T14:31:27.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:27 vm07 bash[17480]: audit 2026-03-09T14:31:26.662929+0000 mon.a (mon.0) 436 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "104be397-ca1c-4a2d-ae2d-97efa37d095a"}]': finished 2026-03-09T14:31:27.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:27 vm07 bash[17480]: cluster 2026-03-09T14:31:26.662952+0000 mon.a (mon.0) 437 : cluster [DBG] osdmap e33: 6 total, 5 up, 6 in 2026-03-09T14:31:27.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:27 vm07 bash[17480]: audit 2026-03-09T14:31:26.662996+0000 mon.a (mon.0) 438 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:31:27.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:27 vm07 bash[17480]: audit 2026-03-09T14:31:27.276127+0000 mon.b (mon.2) 11 : audit [DBG] from='client.? 192.168.123.111:0/1658417674' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:31:28.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:27 vm11 bash[17885]: cephadm 2026-03-09T14:31:25.619654+0000 mgr.y (mgr.14152) 95 : cephadm [INF] Detected new or changed devices on vm11 2026-03-09T14:31:28.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:27 vm11 bash[17885]: cephadm 2026-03-09T14:31:25.625717+0000 mgr.y (mgr.14152) 96 : cephadm [INF] Adjusting osd_memory_target on vm11 to 455.7M 2026-03-09T14:31:28.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:27 vm11 bash[17885]: cephadm 2026-03-09T14:31:25.626114+0000 mgr.y (mgr.14152) 97 : cephadm [WRN] Unable to set osd_memory_target on vm11 to 477918822: error parsing value: Value '477918822' is below minimum 939524096 2026-03-09T14:31:28.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:27 vm11 bash[17885]: audit 2026-03-09T14:31:26.657287+0000 mon.b (mon.2) 10 : audit [INF] from='client.? 192.168.123.111:0/3076707195' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "104be397-ca1c-4a2d-ae2d-97efa37d095a"}]: dispatch 2026-03-09T14:31:28.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:27 vm11 bash[17885]: audit 2026-03-09T14:31:26.657322+0000 mon.a (mon.0) 435 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "104be397-ca1c-4a2d-ae2d-97efa37d095a"}]: dispatch 2026-03-09T14:31:28.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:27 vm11 bash[17885]: audit 2026-03-09T14:31:26.662929+0000 mon.a (mon.0) 436 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "104be397-ca1c-4a2d-ae2d-97efa37d095a"}]': finished 2026-03-09T14:31:28.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:27 vm11 bash[17885]: cluster 2026-03-09T14:31:26.662952+0000 mon.a (mon.0) 437 : cluster [DBG] osdmap e33: 6 total, 5 up, 6 in 2026-03-09T14:31:28.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:27 vm11 bash[17885]: audit 2026-03-09T14:31:26.662996+0000 mon.a (mon.0) 438 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:31:28.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:27 vm11 bash[17885]: audit 2026-03-09T14:31:27.276127+0000 mon.b (mon.2) 11 : audit [DBG] from='client.? 192.168.123.111:0/1658417674' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:31:28.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:28 vm07 bash[22585]: cluster 2026-03-09T14:31:26.971306+0000 mgr.y (mgr.14152) 98 : cluster [DBG] pgmap v75: 1 pgs: 1 remapped+peering; 449 KiB data, 28 MiB used, 100 GiB / 100 GiB avail 2026-03-09T14:31:28.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:28 vm07 bash[17480]: cluster 2026-03-09T14:31:26.971306+0000 mgr.y (mgr.14152) 98 : cluster [DBG] pgmap v75: 1 pgs: 1 remapped+peering; 449 KiB data, 28 MiB used, 100 GiB / 100 GiB avail 2026-03-09T14:31:29.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:28 vm11 bash[17885]: cluster 2026-03-09T14:31:26.971306+0000 mgr.y (mgr.14152) 98 : cluster [DBG] pgmap v75: 1 pgs: 1 remapped+peering; 449 KiB data, 28 MiB used, 100 GiB / 100 GiB avail 2026-03-09T14:31:30.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:30 vm07 bash[22585]: cluster 2026-03-09T14:31:28.971547+0000 mgr.y (mgr.14152) 99 : cluster [DBG] pgmap v76: 1 pgs: 1 remapped+peering; 449 KiB data, 28 MiB used, 100 GiB / 100 GiB avail 2026-03-09T14:31:30.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:30 vm07 bash[17480]: cluster 2026-03-09T14:31:28.971547+0000 mgr.y (mgr.14152) 99 : cluster [DBG] pgmap v76: 1 pgs: 1 remapped+peering; 449 KiB data, 28 MiB used, 100 GiB / 100 GiB avail 2026-03-09T14:31:31.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:30 vm11 bash[17885]: cluster 2026-03-09T14:31:28.971547+0000 mgr.y (mgr.14152) 99 : cluster [DBG] pgmap v76: 1 pgs: 1 remapped+peering; 449 KiB data, 28 MiB used, 100 GiB / 100 GiB avail 2026-03-09T14:31:32.873 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:32 vm11 bash[17885]: cluster 2026-03-09T14:31:30.971784+0000 mgr.y (mgr.14152) 100 : cluster [DBG] pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 65 KiB/s, 0 objects/s recovering 2026-03-09T14:31:32.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:32 vm07 bash[22585]: cluster 2026-03-09T14:31:30.971784+0000 mgr.y (mgr.14152) 100 : cluster [DBG] pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 65 KiB/s, 0 objects/s recovering 2026-03-09T14:31:32.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:32 vm07 bash[17480]: cluster 2026-03-09T14:31:30.971784+0000 mgr.y (mgr.14152) 100 : cluster [DBG] pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 65 KiB/s, 0 objects/s recovering 2026-03-09T14:31:33.428 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:33 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:31:33.428 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:33 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:31:33.428 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:31:33 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:31:33.428 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:31:33 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:31:33.428 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:31:33 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:31:33.428 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:31:33 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:31:33.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:33 vm11 bash[17885]: audit 2026-03-09T14:31:32.647089+0000 mon.a (mon.0) 439 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T14:31:33.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:33 vm11 bash[17885]: audit 2026-03-09T14:31:32.647578+0000 mon.a (mon.0) 440 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:33.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:33 vm11 bash[17885]: audit 2026-03-09T14:31:33.444691+0000 mon.a (mon.0) 441 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:33.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:33 vm11 bash[17885]: audit 2026-03-09T14:31:33.454057+0000 mon.a (mon.0) 442 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:31:33.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:33 vm11 bash[17885]: audit 2026-03-09T14:31:33.454711+0000 mon.a (mon.0) 443 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:33.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:33 vm11 bash[17885]: audit 2026-03-09T14:31:33.455204+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:31:33.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:33 vm07 bash[22585]: audit 2026-03-09T14:31:32.647089+0000 mon.a (mon.0) 439 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T14:31:33.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:33 vm07 bash[22585]: audit 2026-03-09T14:31:32.647578+0000 mon.a (mon.0) 440 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:33.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:33 vm07 bash[22585]: audit 2026-03-09T14:31:33.444691+0000 mon.a (mon.0) 441 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:33.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:33 vm07 bash[22585]: audit 2026-03-09T14:31:33.454057+0000 mon.a (mon.0) 442 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:31:33.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:33 vm07 bash[22585]: audit 2026-03-09T14:31:33.454711+0000 mon.a (mon.0) 443 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:33.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:33 vm07 bash[22585]: audit 2026-03-09T14:31:33.455204+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:31:33.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:33 vm07 bash[17480]: audit 2026-03-09T14:31:32.647089+0000 mon.a (mon.0) 439 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' 
entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T14:31:33.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:33 vm07 bash[17480]: audit 2026-03-09T14:31:32.647578+0000 mon.a (mon.0) 440 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:33.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:33 vm07 bash[17480]: audit 2026-03-09T14:31:33.444691+0000 mon.a (mon.0) 441 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:33.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:33 vm07 bash[17480]: audit 2026-03-09T14:31:33.454057+0000 mon.a (mon.0) 442 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:31:33.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:33 vm07 bash[17480]: audit 2026-03-09T14:31:33.454711+0000 mon.a (mon.0) 443 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:33.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:33 vm07 bash[17480]: audit 2026-03-09T14:31:33.455204+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:31:34.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:34 vm07 bash[22585]: cephadm 2026-03-09T14:31:32.647958+0000 mgr.y (mgr.14152) 101 : cephadm [INF] Deploying daemon osd.5 on vm11 2026-03-09T14:31:34.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:34 vm07 bash[22585]: cluster 2026-03-09T14:31:32.972011+0000 mgr.y (mgr.14152) 102 : cluster [DBG] pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 50 KiB/s, 0 objects/s recovering 2026-03-09T14:31:34.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:34 vm07 bash[17480]: cephadm 2026-03-09T14:31:32.647958+0000 mgr.y (mgr.14152) 101 : cephadm [INF] Deploying daemon osd.5 on vm11 2026-03-09T14:31:34.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:34 vm07 bash[17480]: cluster 2026-03-09T14:31:32.972011+0000 mgr.y (mgr.14152) 102 : cluster [DBG] pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 50 KiB/s, 0 objects/s recovering 2026-03-09T14:31:35.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:34 vm11 bash[17885]: cephadm 2026-03-09T14:31:32.647958+0000 mgr.y (mgr.14152) 101 : cephadm [INF] Deploying daemon osd.5 on vm11 2026-03-09T14:31:35.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:34 vm11 bash[17885]: cluster 2026-03-09T14:31:32.972011+0000 mgr.y (mgr.14152) 102 : cluster [DBG] pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 50 KiB/s, 0 objects/s recovering 2026-03-09T14:31:36.677 INFO:teuthology.orchestra.run.vm11.stdout:Created osd(s) 5 on host 'vm11' 2026-03-09T14:31:36.732 DEBUG:teuthology.orchestra.run.vm11:osd.5> sudo journalctl -f -n 0 -u ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@osd.5.service 2026-03-09T14:31:36.733 INFO:tasks.cephadm:Deploying osd.6 on vm11 with /dev/vdc... 
2026-03-09T14:31:36.733 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- lvm zap /dev/vdc 2026-03-09T14:31:36.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:36 vm11 bash[17885]: cluster 2026-03-09T14:31:34.972226+0000 mgr.y (mgr.14152) 103 : cluster [DBG] pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 44 KiB/s, 0 objects/s recovering 2026-03-09T14:31:36.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:36 vm11 bash[17885]: audit 2026-03-09T14:31:36.310011+0000 mon.a (mon.0) 445 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:36.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:36 vm11 bash[17885]: audit 2026-03-09T14:31:36.315801+0000 mon.a (mon.0) 446 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:36.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:36 vm11 bash[17885]: audit 2026-03-09T14:31:36.501551+0000 mon.a (mon.0) 447 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T14:31:36.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:36 vm11 bash[17885]: audit 2026-03-09T14:31:36.501619+0000 mon.b (mon.2) 12 : audit [INF] from='osd.5 [v2:192.168.123.111:6808/122506048,v1:192.168.123.111:6809/122506048]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T14:31:36.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:36 vm07 bash[22585]: cluster 2026-03-09T14:31:34.972226+0000 mgr.y (mgr.14152) 103 : cluster [DBG] pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 44 KiB/s, 0 objects/s recovering 2026-03-09T14:31:36.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:36 vm07 bash[22585]: audit 2026-03-09T14:31:36.310011+0000 mon.a (mon.0) 445 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:36.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:36 vm07 bash[22585]: audit 2026-03-09T14:31:36.315801+0000 mon.a (mon.0) 446 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:36.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:36 vm07 bash[22585]: audit 2026-03-09T14:31:36.501551+0000 mon.a (mon.0) 447 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T14:31:36.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:36 vm07 bash[22585]: audit 2026-03-09T14:31:36.501619+0000 mon.b (mon.2) 12 : audit [INF] from='osd.5 [v2:192.168.123.111:6808/122506048,v1:192.168.123.111:6809/122506048]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T14:31:36.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:36 vm07 bash[17480]: cluster 2026-03-09T14:31:34.972226+0000 mgr.y (mgr.14152) 103 : cluster [DBG] pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 44 KiB/s, 0 objects/s recovering 2026-03-09T14:31:36.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:36 vm07 bash[17480]: audit 2026-03-09T14:31:36.310011+0000 mon.a (mon.0) 445 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' 
entity='mgr.y' 2026-03-09T14:31:36.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:36 vm07 bash[17480]: audit 2026-03-09T14:31:36.315801+0000 mon.a (mon.0) 446 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:36.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:36 vm07 bash[17480]: audit 2026-03-09T14:31:36.501551+0000 mon.a (mon.0) 447 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T14:31:36.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:36 vm07 bash[17480]: audit 2026-03-09T14:31:36.501619+0000 mon.b (mon.2) 12 : audit [INF] from='osd.5 [v2:192.168.123.111:6808/122506048,v1:192.168.123.111:6809/122506048]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T14:31:37.256 INFO:teuthology.orchestra.run.vm11.stdout: 2026-03-09T14:31:37.267 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph orch daemon add osd vm11:/dev/vdc 2026-03-09T14:31:38.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:37 vm11 bash[17885]: audit 2026-03-09T14:31:36.670698+0000 mon.a (mon.0) 448 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:38.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:37 vm11 bash[17885]: audit 2026-03-09T14:31:36.681597+0000 mon.a (mon.0) 449 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:31:38.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:37 vm11 bash[17885]: audit 2026-03-09T14:31:36.682239+0000 mon.a (mon.0) 450 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:38.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:37 vm11 bash[17885]: audit 2026-03-09T14:31:36.682631+0000 mon.a (mon.0) 451 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:31:38.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:37 vm11 bash[17885]: cluster 2026-03-09T14:31:36.972468+0000 mgr.y (mgr.14152) 104 : cluster [DBG] pgmap v80: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 38 KiB/s, 0 objects/s recovering 2026-03-09T14:31:38.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:37 vm11 bash[17885]: audit 2026-03-09T14:31:37.321790+0000 mon.a (mon.0) 452 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-09T14:31:38.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:37 vm11 bash[17885]: cluster 2026-03-09T14:31:37.321860+0000 mon.a (mon.0) 453 : cluster [DBG] osdmap e34: 6 total, 5 up, 6 in 2026-03-09T14:31:38.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:37 vm11 bash[17885]: audit 2026-03-09T14:31:37.322233+0000 mon.a (mon.0) 454 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:31:38.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:37 vm11 bash[17885]: audit 2026-03-09T14:31:37.322648+0000 mon.a (mon.0) 455 : audit [INF] 
from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:31:38.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:37 vm11 bash[17885]: audit 2026-03-09T14:31:37.322711+0000 mon.b (mon.2) 13 : audit [INF] from='osd.5 [v2:192.168.123.111:6808/122506048,v1:192.168.123.111:6809/122506048]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:31:38.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:37 vm11 bash[17885]: audit 2026-03-09T14:31:37.639548+0000 mon.a (mon.0) 456 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:31:38.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:37 vm11 bash[17885]: audit 2026-03-09T14:31:37.640811+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:31:38.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:37 vm11 bash[17885]: audit 2026-03-09T14:31:37.641200+0000 mon.a (mon.0) 458 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:38.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:37 vm07 bash[22585]: audit 2026-03-09T14:31:36.670698+0000 mon.a (mon.0) 448 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:38.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:37 vm07 bash[22585]: audit 2026-03-09T14:31:36.681597+0000 mon.a (mon.0) 449 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:31:38.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:37 vm07 bash[22585]: audit 2026-03-09T14:31:36.682239+0000 mon.a (mon.0) 450 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:38.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:37 vm07 bash[22585]: audit 2026-03-09T14:31:36.682631+0000 mon.a (mon.0) 451 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:31:38.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:37 vm07 bash[22585]: cluster 2026-03-09T14:31:36.972468+0000 mgr.y (mgr.14152) 104 : cluster [DBG] pgmap v80: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 38 KiB/s, 0 objects/s recovering 2026-03-09T14:31:38.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:37 vm07 bash[22585]: audit 2026-03-09T14:31:37.321790+0000 mon.a (mon.0) 452 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-09T14:31:38.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:37 vm07 bash[22585]: cluster 2026-03-09T14:31:37.321860+0000 mon.a (mon.0) 453 : cluster [DBG] osdmap e34: 6 total, 5 up, 6 in 2026-03-09T14:31:38.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:37 vm07 bash[22585]: audit 2026-03-09T14:31:37.322233+0000 mon.a (mon.0) 454 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": 
"osd metadata", "id": 5}]: dispatch 2026-03-09T14:31:38.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:37 vm07 bash[22585]: audit 2026-03-09T14:31:37.322648+0000 mon.a (mon.0) 455 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:31:38.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:37 vm07 bash[22585]: audit 2026-03-09T14:31:37.322711+0000 mon.b (mon.2) 13 : audit [INF] from='osd.5 [v2:192.168.123.111:6808/122506048,v1:192.168.123.111:6809/122506048]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:31:38.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:37 vm07 bash[22585]: audit 2026-03-09T14:31:37.639548+0000 mon.a (mon.0) 456 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:31:38.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:37 vm07 bash[22585]: audit 2026-03-09T14:31:37.640811+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:31:38.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:37 vm07 bash[22585]: audit 2026-03-09T14:31:37.641200+0000 mon.a (mon.0) 458 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:38.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:37 vm07 bash[17480]: audit 2026-03-09T14:31:36.670698+0000 mon.a (mon.0) 448 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:38.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:37 vm07 bash[17480]: audit 2026-03-09T14:31:36.681597+0000 mon.a (mon.0) 449 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:31:38.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:37 vm07 bash[17480]: audit 2026-03-09T14:31:36.682239+0000 mon.a (mon.0) 450 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:38.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:37 vm07 bash[17480]: audit 2026-03-09T14:31:36.682631+0000 mon.a (mon.0) 451 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:31:38.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:37 vm07 bash[17480]: cluster 2026-03-09T14:31:36.972468+0000 mgr.y (mgr.14152) 104 : cluster [DBG] pgmap v80: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 38 KiB/s, 0 objects/s recovering 2026-03-09T14:31:38.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:37 vm07 bash[17480]: audit 2026-03-09T14:31:37.321790+0000 mon.a (mon.0) 452 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-09T14:31:38.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:37 vm07 bash[17480]: cluster 2026-03-09T14:31:37.321860+0000 mon.a (mon.0) 453 : cluster [DBG] osdmap e34: 6 total, 5 up, 6 in 2026-03-09T14:31:38.168 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:37 vm07 bash[17480]: audit 2026-03-09T14:31:37.322233+0000 mon.a (mon.0) 454 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:31:38.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:37 vm07 bash[17480]: audit 2026-03-09T14:31:37.322648+0000 mon.a (mon.0) 455 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:31:38.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:37 vm07 bash[17480]: audit 2026-03-09T14:31:37.322711+0000 mon.b (mon.2) 13 : audit [INF] from='osd.5 [v2:192.168.123.111:6808/122506048,v1:192.168.123.111:6809/122506048]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:31:38.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:37 vm07 bash[17480]: audit 2026-03-09T14:31:37.639548+0000 mon.a (mon.0) 456 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:31:38.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:37 vm07 bash[17480]: audit 2026-03-09T14:31:37.640811+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:31:38.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:37 vm07 bash[17480]: audit 2026-03-09T14:31:37.641200+0000 mon.a (mon.0) 458 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:38.511 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:31:38 vm11 bash[23966]: debug 2026-03-09T14:31:38.326+0000 7f8d0f288700 -1 osd.5 0 waiting for initial osdmap 2026-03-09T14:31:38.511 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:31:38 vm11 bash[23966]: debug 2026-03-09T14:31:38.330+0000 7f8d0bc23700 -1 osd.5 35 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T14:31:39.667 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:39 vm07 bash[22585]: audit 2026-03-09T14:31:37.638311+0000 mgr.y (mgr.14152) 105 : audit [DBG] from='client.24233 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm11:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:31:39.668 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:39 vm07 bash[22585]: audit 2026-03-09T14:31:38.325532+0000 mon.a (mon.0) 459 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]': finished 2026-03-09T14:31:39.668 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:39 vm07 bash[22585]: cluster 2026-03-09T14:31:38.325604+0000 mon.a (mon.0) 460 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in 2026-03-09T14:31:39.668 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:39 vm07 bash[22585]: audit 2026-03-09T14:31:38.326266+0000 mon.a (mon.0) 461 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:31:39.668 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:39 vm07 bash[22585]: audit 2026-03-09T14:31:38.328209+0000 mon.a 
(mon.0) 462 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:31:39.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:39 vm07 bash[17480]: audit 2026-03-09T14:31:37.638311+0000 mgr.y (mgr.14152) 105 : audit [DBG] from='client.24233 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm11:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:31:39.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:39 vm07 bash[17480]: audit 2026-03-09T14:31:38.325532+0000 mon.a (mon.0) 459 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]': finished 2026-03-09T14:31:39.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:39 vm07 bash[17480]: cluster 2026-03-09T14:31:38.325604+0000 mon.a (mon.0) 460 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in 2026-03-09T14:31:39.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:39 vm07 bash[17480]: audit 2026-03-09T14:31:38.326266+0000 mon.a (mon.0) 461 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:31:39.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:39 vm07 bash[17480]: audit 2026-03-09T14:31:38.328209+0000 mon.a (mon.0) 462 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:31:39.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:39 vm11 bash[17885]: audit 2026-03-09T14:31:37.638311+0000 mgr.y (mgr.14152) 105 : audit [DBG] from='client.24233 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm11:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:31:39.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:39 vm11 bash[17885]: audit 2026-03-09T14:31:38.325532+0000 mon.a (mon.0) 459 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]': finished 2026-03-09T14:31:39.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:39 vm11 bash[17885]: cluster 2026-03-09T14:31:38.325604+0000 mon.a (mon.0) 460 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in 2026-03-09T14:31:39.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:39 vm11 bash[17885]: audit 2026-03-09T14:31:38.326266+0000 mon.a (mon.0) 461 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:31:39.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:39 vm11 bash[17885]: audit 2026-03-09T14:31:38.328209+0000 mon.a (mon.0) 462 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:31:40.667 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:40 vm07 bash[22585]: cluster 2026-03-09T14:31:37.533567+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T14:31:40.668 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:40 vm07 bash[22585]: cluster 2026-03-09T14:31:37.533649+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T14:31:40.668 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:40 vm07 bash[22585]: cluster 2026-03-09T14:31:38.972737+0000 mgr.y (mgr.14152) 106 : cluster [DBG] pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 29 
MiB used, 100 GiB / 100 GiB avail 2026-03-09T14:31:40.668 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:40 vm07 bash[22585]: audit 2026-03-09T14:31:39.328089+0000 mon.a (mon.0) 463 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:31:40.668 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:40 vm07 bash[22585]: cluster 2026-03-09T14:31:39.336177+0000 mon.a (mon.0) 464 : cluster [INF] osd.5 [v2:192.168.123.111:6808/122506048,v1:192.168.123.111:6809/122506048] boot 2026-03-09T14:31:40.668 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:40 vm07 bash[22585]: cluster 2026-03-09T14:31:39.336254+0000 mon.a (mon.0) 465 : cluster [DBG] osdmap e36: 6 total, 6 up, 6 in 2026-03-09T14:31:40.668 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:40 vm07 bash[22585]: audit 2026-03-09T14:31:39.339257+0000 mon.a (mon.0) 466 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:31:40.668 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:40 vm07 bash[22585]: audit 2026-03-09T14:31:39.863446+0000 mon.b (mon.2) 14 : audit [INF] from='client.? 192.168.123.111:0/4169766944' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "77a63107-dca7-4e61-85ab-633ea82bcb7d"}]: dispatch 2026-03-09T14:31:40.668 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:40 vm07 bash[22585]: audit 2026-03-09T14:31:39.863518+0000 mon.a (mon.0) 467 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "77a63107-dca7-4e61-85ab-633ea82bcb7d"}]: dispatch 2026-03-09T14:31:40.668 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:40 vm07 bash[22585]: audit 2026-03-09T14:31:39.870716+0000 mon.a (mon.0) 468 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "77a63107-dca7-4e61-85ab-633ea82bcb7d"}]': finished 2026-03-09T14:31:40.668 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:40 vm07 bash[22585]: cluster 2026-03-09T14:31:39.871139+0000 mon.a (mon.0) 469 : cluster [DBG] osdmap e37: 7 total, 6 up, 7 in 2026-03-09T14:31:40.668 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:40 vm07 bash[22585]: audit 2026-03-09T14:31:39.871283+0000 mon.a (mon.0) 470 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:31:40.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:40 vm07 bash[17480]: cluster 2026-03-09T14:31:37.533567+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T14:31:40.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:40 vm07 bash[17480]: cluster 2026-03-09T14:31:37.533649+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T14:31:40.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:40 vm07 bash[17480]: cluster 2026-03-09T14:31:38.972737+0000 mgr.y (mgr.14152) 106 : cluster [DBG] pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail 2026-03-09T14:31:40.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:40 vm07 bash[17480]: audit 2026-03-09T14:31:39.328089+0000 mon.a (mon.0) 463 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:31:40.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:40 vm07 bash[17480]: cluster 2026-03-09T14:31:39.336177+0000 mon.a (mon.0) 464 : cluster [INF] osd.5 [v2:192.168.123.111:6808/122506048,v1:192.168.123.111:6809/122506048] boot 2026-03-09T14:31:40.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:40 vm07 bash[17480]: cluster 2026-03-09T14:31:39.336254+0000 mon.a (mon.0) 465 : cluster [DBG] osdmap e36: 6 total, 6 up, 6 in 2026-03-09T14:31:40.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:40 vm07 bash[17480]: audit 2026-03-09T14:31:39.339257+0000 mon.a (mon.0) 466 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:31:40.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:40 vm07 bash[17480]: audit 2026-03-09T14:31:39.863446+0000 mon.b (mon.2) 14 : audit [INF] from='client.? 192.168.123.111:0/4169766944' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "77a63107-dca7-4e61-85ab-633ea82bcb7d"}]: dispatch 2026-03-09T14:31:40.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:40 vm07 bash[17480]: audit 2026-03-09T14:31:39.863518+0000 mon.a (mon.0) 467 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "77a63107-dca7-4e61-85ab-633ea82bcb7d"}]: dispatch 2026-03-09T14:31:40.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:40 vm07 bash[17480]: audit 2026-03-09T14:31:39.870716+0000 mon.a (mon.0) 468 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "77a63107-dca7-4e61-85ab-633ea82bcb7d"}]': finished 2026-03-09T14:31:40.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:40 vm07 bash[17480]: cluster 2026-03-09T14:31:39.871139+0000 mon.a (mon.0) 469 : cluster [DBG] osdmap e37: 7 total, 6 up, 7 in 2026-03-09T14:31:40.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:40 vm07 bash[17480]: audit 2026-03-09T14:31:39.871283+0000 mon.a (mon.0) 470 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:31:40.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:40 vm11 bash[17885]: cluster 2026-03-09T14:31:37.533567+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T14:31:40.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:40 vm11 bash[17885]: cluster 2026-03-09T14:31:37.533649+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T14:31:40.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:40 vm11 bash[17885]: cluster 2026-03-09T14:31:38.972737+0000 mgr.y (mgr.14152) 106 : cluster [DBG] pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail 2026-03-09T14:31:40.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:40 vm11 bash[17885]: audit 2026-03-09T14:31:39.328089+0000 mon.a (mon.0) 463 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:31:40.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:40 vm11 bash[17885]: cluster 2026-03-09T14:31:39.336177+0000 mon.a (mon.0) 464 : cluster [INF] osd.5 [v2:192.168.123.111:6808/122506048,v1:192.168.123.111:6809/122506048] boot 2026-03-09T14:31:40.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:40 vm11 bash[17885]: cluster 2026-03-09T14:31:39.336254+0000 mon.a (mon.0) 465 : cluster [DBG] osdmap e36: 6 total, 6 up, 6 in 2026-03-09T14:31:40.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:40 vm11 bash[17885]: audit 2026-03-09T14:31:39.339257+0000 mon.a (mon.0) 466 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:31:40.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:40 vm11 bash[17885]: audit 2026-03-09T14:31:39.863446+0000 mon.b (mon.2) 14 : audit [INF] from='client.? 192.168.123.111:0/4169766944' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "77a63107-dca7-4e61-85ab-633ea82bcb7d"}]: dispatch 2026-03-09T14:31:40.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:40 vm11 bash[17885]: audit 2026-03-09T14:31:39.863518+0000 mon.a (mon.0) 467 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "77a63107-dca7-4e61-85ab-633ea82bcb7d"}]: dispatch 2026-03-09T14:31:40.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:40 vm11 bash[17885]: audit 2026-03-09T14:31:39.870716+0000 mon.a (mon.0) 468 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "77a63107-dca7-4e61-85ab-633ea82bcb7d"}]': finished 2026-03-09T14:31:40.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:40 vm11 bash[17885]: cluster 2026-03-09T14:31:39.871139+0000 mon.a (mon.0) 469 : cluster [DBG] osdmap e37: 7 total, 6 up, 7 in 2026-03-09T14:31:40.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:40 vm11 bash[17885]: audit 2026-03-09T14:31:39.871283+0000 mon.a (mon.0) 470 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:31:41.667 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:41 vm07 bash[22585]: audit 2026-03-09T14:31:40.533331+0000 mon.b (mon.2) 15 : audit [DBG] from='client.? 192.168.123.111:0/1591351837' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:31:41.668 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:41 vm07 bash[22585]: cluster 2026-03-09T14:31:40.876089+0000 mon.a (mon.0) 471 : cluster [DBG] osdmap e38: 7 total, 6 up, 7 in 2026-03-09T14:31:41.668 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:41 vm07 bash[22585]: audit 2026-03-09T14:31:40.876486+0000 mon.a (mon.0) 472 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:31:41.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:41 vm07 bash[17480]: audit 2026-03-09T14:31:40.533331+0000 mon.b (mon.2) 15 : audit [DBG] from='client.? 192.168.123.111:0/1591351837' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:31:41.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:41 vm07 bash[17480]: cluster 2026-03-09T14:31:40.876089+0000 mon.a (mon.0) 471 : cluster [DBG] osdmap e38: 7 total, 6 up, 7 in 2026-03-09T14:31:41.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:41 vm07 bash[17480]: audit 2026-03-09T14:31:40.876486+0000 mon.a (mon.0) 472 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:31:41.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:41 vm11 bash[17885]: audit 2026-03-09T14:31:40.533331+0000 mon.b (mon.2) 15 : audit [DBG] from='client.? 
192.168.123.111:0/1591351837' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:31:41.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:41 vm11 bash[17885]: cluster 2026-03-09T14:31:40.876089+0000 mon.a (mon.0) 471 : cluster [DBG] osdmap e38: 7 total, 6 up, 7 in 2026-03-09T14:31:41.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:41 vm11 bash[17885]: audit 2026-03-09T14:31:40.876486+0000 mon.a (mon.0) 472 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:31:42.667 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:42 vm07 bash[22585]: cluster 2026-03-09T14:31:40.972980+0000 mgr.y (mgr.14152) 107 : cluster [DBG] pgmap v87: 1 pgs: 1 peering; 0 B data, 34 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:31:42.668 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:42 vm07 bash[22585]: audit 2026-03-09T14:31:42.324292+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:31:42.668 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:42 vm07 bash[22585]: audit 2026-03-09T14:31:42.327490+0000 mon.a (mon.0) 474 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:31:42.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:42 vm07 bash[17480]: cluster 2026-03-09T14:31:40.972980+0000 mgr.y (mgr.14152) 107 : cluster [DBG] pgmap v87: 1 pgs: 1 peering; 0 B data, 34 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:31:42.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:42 vm07 bash[17480]: audit 2026-03-09T14:31:42.324292+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:31:42.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:42 vm07 bash[17480]: audit 2026-03-09T14:31:42.327490+0000 mon.a (mon.0) 474 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:31:42.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:42 vm11 bash[17885]: cluster 2026-03-09T14:31:40.972980+0000 mgr.y (mgr.14152) 107 : cluster [DBG] pgmap v87: 1 pgs: 1 peering; 0 B data, 34 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:31:42.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:42 vm11 bash[17885]: audit 2026-03-09T14:31:42.324292+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:31:42.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:42 vm11 bash[17885]: audit 2026-03-09T14:31:42.327490+0000 mon.a (mon.0) 474 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:31:44.667 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:44 vm07 bash[22585]: cluster 2026-03-09T14:31:42.973280+0000 mgr.y (mgr.14152) 108 : cluster [DBG] pgmap v88: 1 pgs: 1 peering; 0 B data, 34 MiB used, 120 GiB / 120 
GiB avail 2026-03-09T14:31:44.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:44 vm07 bash[17480]: cluster 2026-03-09T14:31:42.973280+0000 mgr.y (mgr.14152) 108 : cluster [DBG] pgmap v88: 1 pgs: 1 peering; 0 B data, 34 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:31:44.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:44 vm11 bash[17885]: cluster 2026-03-09T14:31:42.973280+0000 mgr.y (mgr.14152) 108 : cluster [DBG] pgmap v88: 1 pgs: 1 peering; 0 B data, 34 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:31:45.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:45 vm07 bash[22585]: cephadm 2026-03-09T14:31:44.570496+0000 mgr.y (mgr.14152) 109 : cephadm [INF] Detected new or changed devices on vm11 2026-03-09T14:31:45.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:45 vm07 bash[22585]: audit 2026-03-09T14:31:44.576754+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:45.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:45 vm07 bash[22585]: audit 2026-03-09T14:31:44.577727+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:31:45.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:45 vm07 bash[22585]: audit 2026-03-09T14:31:44.578634+0000 mon.a (mon.0) 477 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:31:45.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:45 vm07 bash[22585]: cephadm 2026-03-09T14:31:44.578984+0000 mgr.y (mgr.14152) 110 : cephadm [INF] Adjusting osd_memory_target on vm11 to 227.8M 2026-03-09T14:31:45.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:45 vm07 bash[22585]: cephadm 2026-03-09T14:31:44.579438+0000 mgr.y (mgr.14152) 111 : cephadm [WRN] Unable to set osd_memory_target on vm11 to 238959411: error parsing value: Value '238959411' is below minimum 939524096 2026-03-09T14:31:45.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:45 vm07 bash[22585]: audit 2026-03-09T14:31:44.582911+0000 mon.a (mon.0) 478 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:45.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:45 vm07 bash[17480]: cephadm 2026-03-09T14:31:44.570496+0000 mgr.y (mgr.14152) 109 : cephadm [INF] Detected new or changed devices on vm11 2026-03-09T14:31:45.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:45 vm07 bash[17480]: audit 2026-03-09T14:31:44.576754+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:45.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:45 vm07 bash[17480]: audit 2026-03-09T14:31:44.577727+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:31:45.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:45 vm07 bash[17480]: audit 2026-03-09T14:31:44.578634+0000 mon.a (mon.0) 477 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:31:45.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:45 vm07 bash[17480]: cephadm 2026-03-09T14:31:44.578984+0000 mgr.y (mgr.14152) 110 : 
cephadm [INF] Adjusting osd_memory_target on vm11 to 227.8M 2026-03-09T14:31:45.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:45 vm07 bash[17480]: cephadm 2026-03-09T14:31:44.579438+0000 mgr.y (mgr.14152) 111 : cephadm [WRN] Unable to set osd_memory_target on vm11 to 238959411: error parsing value: Value '238959411' is below minimum 939524096 2026-03-09T14:31:45.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:45 vm07 bash[17480]: audit 2026-03-09T14:31:44.582911+0000 mon.a (mon.0) 478 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:46.012 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:45 vm11 bash[17885]: cephadm 2026-03-09T14:31:44.570496+0000 mgr.y (mgr.14152) 109 : cephadm [INF] Detected new or changed devices on vm11 2026-03-09T14:31:46.012 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:45 vm11 bash[17885]: audit 2026-03-09T14:31:44.576754+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:46.012 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:45 vm11 bash[17885]: audit 2026-03-09T14:31:44.577727+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:31:46.012 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:45 vm11 bash[17885]: audit 2026-03-09T14:31:44.578634+0000 mon.a (mon.0) 477 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:31:46.012 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:45 vm11 bash[17885]: cephadm 2026-03-09T14:31:44.578984+0000 mgr.y (mgr.14152) 110 : cephadm [INF] Adjusting osd_memory_target on vm11 to 227.8M 2026-03-09T14:31:46.012 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:45 vm11 bash[17885]: cephadm 2026-03-09T14:31:44.579438+0000 mgr.y (mgr.14152) 111 : cephadm [WRN] Unable to set osd_memory_target on vm11 to 238959411: error parsing value: Value '238959411' is below minimum 939524096 2026-03-09T14:31:46.012 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:45 vm11 bash[17885]: audit 2026-03-09T14:31:44.582911+0000 mon.a (mon.0) 478 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:46.865 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:46 vm11 bash[17885]: cluster 2026-03-09T14:31:44.973553+0000 mgr.y (mgr.14152) 112 : cluster [DBG] pgmap v89: 1 pgs: 1 peering; 0 B data, 34 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:31:46.866 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:46 vm11 bash[17885]: audit 2026-03-09T14:31:46.072012+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T14:31:46.866 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:46 vm11 bash[17885]: audit 2026-03-09T14:31:46.072493+0000 mon.a (mon.0) 480 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:46.866 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:46 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:31:46.866 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:31:46 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:31:46.866 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:31:46 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:31:46.866 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:31:46 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:31:46.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:46 vm07 bash[22585]: cluster 2026-03-09T14:31:44.973553+0000 mgr.y (mgr.14152) 112 : cluster [DBG] pgmap v89: 1 pgs: 1 peering; 0 B data, 34 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:31:46.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:46 vm07 bash[22585]: audit 2026-03-09T14:31:46.072012+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T14:31:46.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:46 vm07 bash[22585]: audit 2026-03-09T14:31:46.072493+0000 mon.a (mon.0) 480 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:46.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:46 vm07 bash[17480]: cluster 2026-03-09T14:31:44.973553+0000 mgr.y (mgr.14152) 112 : cluster [DBG] pgmap v89: 1 pgs: 1 peering; 0 B data, 34 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:31:46.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:46 vm07 bash[17480]: audit 2026-03-09T14:31:46.072012+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T14:31:46.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:46 vm07 bash[17480]: audit 2026-03-09T14:31:46.072493+0000 mon.a (mon.0) 480 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:47.261 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:31:46 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:31:47.261 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:31:46 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:31:47.261 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:31:46 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:31:47.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:46 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:31:47.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:47 vm11 bash[17885]: cephadm 2026-03-09T14:31:46.072858+0000 mgr.y (mgr.14152) 113 : cephadm [INF] Deploying daemon osd.6 on vm11 2026-03-09T14:31:47.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:47 vm11 bash[17885]: audit 2026-03-09T14:31:46.948740+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:47.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:47 vm11 bash[17885]: audit 2026-03-09T14:31:46.950809+0000 mon.a (mon.0) 482 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:31:47.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:47 vm11 bash[17885]: audit 2026-03-09T14:31:46.951668+0000 mon.a (mon.0) 483 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:47.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:47 vm11 bash[17885]: audit 2026-03-09T14:31:46.952348+0000 mon.a (mon.0) 484 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:31:47.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:47 vm07 bash[17480]: cephadm 2026-03-09T14:31:46.072858+0000 mgr.y (mgr.14152) 113 : cephadm [INF] Deploying daemon osd.6 on vm11 2026-03-09T14:31:47.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:47 vm07 bash[17480]: audit 2026-03-09T14:31:46.948740+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:47.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:47 vm07 bash[17480]: audit 2026-03-09T14:31:46.950809+0000 mon.a (mon.0) 482 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: 
dispatch 2026-03-09T14:31:47.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:47 vm07 bash[17480]: audit 2026-03-09T14:31:46.951668+0000 mon.a (mon.0) 483 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:47.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:47 vm07 bash[17480]: audit 2026-03-09T14:31:46.952348+0000 mon.a (mon.0) 484 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:31:47.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:47 vm07 bash[22585]: cephadm 2026-03-09T14:31:46.072858+0000 mgr.y (mgr.14152) 113 : cephadm [INF] Deploying daemon osd.6 on vm11 2026-03-09T14:31:47.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:47 vm07 bash[22585]: audit 2026-03-09T14:31:46.948740+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:47.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:47 vm07 bash[22585]: audit 2026-03-09T14:31:46.950809+0000 mon.a (mon.0) 482 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:31:47.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:47 vm07 bash[22585]: audit 2026-03-09T14:31:46.951668+0000 mon.a (mon.0) 483 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:47.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:47 vm07 bash[22585]: audit 2026-03-09T14:31:46.952348+0000 mon.a (mon.0) 484 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:31:48.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:48 vm07 bash[22585]: cluster 2026-03-09T14:31:46.973984+0000 mgr.y (mgr.14152) 114 : cluster [DBG] pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:31:48.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:48 vm07 bash[17480]: cluster 2026-03-09T14:31:46.973984+0000 mgr.y (mgr.14152) 114 : cluster [DBG] pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:31:49.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:48 vm11 bash[17885]: cluster 2026-03-09T14:31:46.973984+0000 mgr.y (mgr.14152) 114 : cluster [DBG] pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:31:50.376 INFO:teuthology.orchestra.run.vm11.stdout:Created osd(s) 6 on host 'vm11' 2026-03-09T14:31:50.438 DEBUG:teuthology.orchestra.run.vm11:osd.6> sudo journalctl -f -n 0 -u ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@osd.6.service 2026-03-09T14:31:50.439 INFO:tasks.cephadm:Deploying osd.7 on vm11 with /dev/vdb... 
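The cephadm warning a little above ("Unable to set osd_memory_target on vm11 to 238959411: error parsing value: Value '238959411' is below minimum 939524096") is the mgr's OSD memory autotuner at work: on these small VPS nodes the per-OSD share of host memory works out to roughly 228 MiB, which sits far below the hard floor Ceph enforces for osd_memory_target (939524096 bytes, i.e. 896 MiB), so the mgr logs a [WRN] instead of applying the value. A quick sanity check of the two numbers, taken verbatim from the warning (how the host RAM gets split across daemons is not shown in the log and is not assumed here):

    target=238959411      # value the autotuner tried to set (~227.8 MiB, matching the "Adjusting ... to 227.8M" line)
    minimum=939524096     # hard minimum quoted in the warning (896 MiB)
    echo "target : $(( target / 1024 / 1024 )) MiB"
    echo "minimum: $(( minimum / 1024 / 1024 )) MiB"
    [ "$target" -lt "$minimum" ] && echo "below minimum -> cephadm logs [WRN] and skips the config set"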
2026-03-09T14:31:50.439 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- lvm zap /dev/vdb 2026-03-09T14:31:50.706 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:50 vm11 bash[17885]: cluster 2026-03-09T14:31:48.974321+0000 mgr.y (mgr.14152) 115 : cluster [DBG] pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:31:50.706 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:50 vm11 bash[17885]: audit 2026-03-09T14:31:49.918175+0000 mon.a (mon.0) 485 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:50.706 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:50 vm11 bash[17885]: audit 2026-03-09T14:31:49.924803+0000 mon.a (mon.0) 486 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:50.706 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:50 vm11 bash[17885]: audit 2026-03-09T14:31:50.177079+0000 mon.b (mon.2) 16 : audit [INF] from='osd.6 [v2:192.168.123.111:6816/615402579,v1:192.168.123.111:6817/615402579]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T14:31:50.706 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:50 vm11 bash[17885]: audit 2026-03-09T14:31:50.177166+0000 mon.a (mon.0) 487 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T14:31:50.706 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:50 vm11 bash[17885]: audit 2026-03-09T14:31:50.370631+0000 mon.a (mon.0) 488 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:50.706 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:50 vm11 bash[17885]: audit 2026-03-09T14:31:50.391114+0000 mon.a (mon.0) 489 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:31:50.706 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:50 vm11 bash[17885]: audit 2026-03-09T14:31:50.392097+0000 mon.a (mon.0) 490 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:50.706 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:50 vm11 bash[17885]: audit 2026-03-09T14:31:50.392575+0000 mon.a (mon.0) 491 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:31:50.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:50 vm07 bash[22585]: cluster 2026-03-09T14:31:48.974321+0000 mgr.y (mgr.14152) 115 : cluster [DBG] pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:31:50.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:50 vm07 bash[22585]: audit 2026-03-09T14:31:49.918175+0000 mon.a (mon.0) 485 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:50.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:50 vm07 bash[22585]: audit 2026-03-09T14:31:49.924803+0000 mon.a (mon.0) 486 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:50.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:50 vm07 bash[22585]: audit 
2026-03-09T14:31:50.177079+0000 mon.b (mon.2) 16 : audit [INF] from='osd.6 [v2:192.168.123.111:6816/615402579,v1:192.168.123.111:6817/615402579]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T14:31:50.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:50 vm07 bash[22585]: audit 2026-03-09T14:31:50.177166+0000 mon.a (mon.0) 487 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T14:31:50.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:50 vm07 bash[22585]: audit 2026-03-09T14:31:50.370631+0000 mon.a (mon.0) 488 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:50.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:50 vm07 bash[22585]: audit 2026-03-09T14:31:50.391114+0000 mon.a (mon.0) 489 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:31:50.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:50 vm07 bash[22585]: audit 2026-03-09T14:31:50.392097+0000 mon.a (mon.0) 490 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:50.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:50 vm07 bash[22585]: audit 2026-03-09T14:31:50.392575+0000 mon.a (mon.0) 491 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:31:50.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:50 vm07 bash[17480]: cluster 2026-03-09T14:31:48.974321+0000 mgr.y (mgr.14152) 115 : cluster [DBG] pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:31:50.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:50 vm07 bash[17480]: audit 2026-03-09T14:31:49.918175+0000 mon.a (mon.0) 485 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:50.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:50 vm07 bash[17480]: audit 2026-03-09T14:31:49.924803+0000 mon.a (mon.0) 486 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:50.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:50 vm07 bash[17480]: audit 2026-03-09T14:31:50.177079+0000 mon.b (mon.2) 16 : audit [INF] from='osd.6 [v2:192.168.123.111:6816/615402579,v1:192.168.123.111:6817/615402579]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T14:31:50.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:50 vm07 bash[17480]: audit 2026-03-09T14:31:50.177166+0000 mon.a (mon.0) 487 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T14:31:50.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:50 vm07 bash[17480]: audit 2026-03-09T14:31:50.370631+0000 mon.a (mon.0) 488 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:50.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:50 vm07 bash[17480]: audit 2026-03-09T14:31:50.391114+0000 mon.a (mon.0) 489 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:31:50.918 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:50 vm07 bash[17480]: audit 2026-03-09T14:31:50.392097+0000 mon.a (mon.0) 490 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:50.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:50 vm07 bash[17480]: audit 2026-03-09T14:31:50.392575+0000 mon.a (mon.0) 491 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:31:51.092 INFO:teuthology.orchestra.run.vm11.stdout: 2026-03-09T14:31:51.105 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph orch daemon add osd vm11:/dev/vdb 2026-03-09T14:31:52.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:51 vm11 bash[17885]: audit 2026-03-09T14:31:50.932369+0000 mon.a (mon.0) 492 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-09T14:31:52.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:51 vm11 bash[17885]: cluster 2026-03-09T14:31:50.932658+0000 mon.a (mon.0) 493 : cluster [DBG] osdmap e39: 7 total, 6 up, 7 in 2026-03-09T14:31:52.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:51 vm11 bash[17885]: audit 2026-03-09T14:31:50.932940+0000 mon.a (mon.0) 494 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:31:52.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:51 vm11 bash[17885]: audit 2026-03-09T14:31:50.933708+0000 mon.b (mon.2) 17 : audit [INF] from='osd.6 [v2:192.168.123.111:6816/615402579,v1:192.168.123.111:6817/615402579]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:31:52.262 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:51 vm11 bash[17885]: audit 2026-03-09T14:31:50.933755+0000 mon.a (mon.0) 495 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:31:52.262 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:51 vm11 bash[17885]: cluster 2026-03-09T14:31:50.974658+0000 mgr.y (mgr.14152) 116 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:31:52.262 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:51 vm11 bash[17885]: audit 2026-03-09T14:31:51.546606+0000 mgr.y (mgr.14152) 117 : audit [DBG] from='client.24260 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm11:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:31:52.262 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:51 vm11 bash[17885]: audit 2026-03-09T14:31:51.548138+0000 mon.a (mon.0) 496 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:31:52.262 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:51 vm11 bash[17885]: audit 2026-03-09T14:31:51.549431+0000 mon.a (mon.0) 497 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": 
"client.bootstrap-osd"}]: dispatch 2026-03-09T14:31:52.262 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:51 vm11 bash[17885]: audit 2026-03-09T14:31:51.549873+0000 mon.a (mon.0) 498 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:52.262 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:31:51 vm11 bash[27120]: debug 2026-03-09T14:31:51.938+0000 7f07f87bf700 -1 osd.6 0 waiting for initial osdmap 2026-03-09T14:31:52.262 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:31:51 vm11 bash[27120]: debug 2026-03-09T14:31:51.946+0000 7f07f3156700 -1 osd.6 40 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T14:31:52.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:51 vm07 bash[22585]: audit 2026-03-09T14:31:50.932369+0000 mon.a (mon.0) 492 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-09T14:31:52.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:51 vm07 bash[22585]: cluster 2026-03-09T14:31:50.932658+0000 mon.a (mon.0) 493 : cluster [DBG] osdmap e39: 7 total, 6 up, 7 in 2026-03-09T14:31:52.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:51 vm07 bash[22585]: audit 2026-03-09T14:31:50.932940+0000 mon.a (mon.0) 494 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:31:52.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:51 vm07 bash[22585]: audit 2026-03-09T14:31:50.933708+0000 mon.b (mon.2) 17 : audit [INF] from='osd.6 [v2:192.168.123.111:6816/615402579,v1:192.168.123.111:6817/615402579]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:31:52.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:51 vm07 bash[22585]: audit 2026-03-09T14:31:50.933755+0000 mon.a (mon.0) 495 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:31:52.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:51 vm07 bash[22585]: cluster 2026-03-09T14:31:50.974658+0000 mgr.y (mgr.14152) 116 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:31:52.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:51 vm07 bash[22585]: audit 2026-03-09T14:31:51.546606+0000 mgr.y (mgr.14152) 117 : audit [DBG] from='client.24260 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm11:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:31:52.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:51 vm07 bash[22585]: audit 2026-03-09T14:31:51.548138+0000 mon.a (mon.0) 496 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:31:52.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:51 vm07 bash[22585]: audit 2026-03-09T14:31:51.549431+0000 mon.a (mon.0) 497 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:31:52.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:51 vm07 bash[22585]: audit 2026-03-09T14:31:51.549873+0000 
mon.a (mon.0) 498 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:52.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:51 vm07 bash[17480]: audit 2026-03-09T14:31:50.932369+0000 mon.a (mon.0) 492 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-09T14:31:52.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:51 vm07 bash[17480]: cluster 2026-03-09T14:31:50.932658+0000 mon.a (mon.0) 493 : cluster [DBG] osdmap e39: 7 total, 6 up, 7 in 2026-03-09T14:31:52.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:51 vm07 bash[17480]: audit 2026-03-09T14:31:50.932940+0000 mon.a (mon.0) 494 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:31:52.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:51 vm07 bash[17480]: audit 2026-03-09T14:31:50.933708+0000 mon.b (mon.2) 17 : audit [INF] from='osd.6 [v2:192.168.123.111:6816/615402579,v1:192.168.123.111:6817/615402579]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:31:52.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:51 vm07 bash[17480]: audit 2026-03-09T14:31:50.933755+0000 mon.a (mon.0) 495 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:31:52.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:51 vm07 bash[17480]: cluster 2026-03-09T14:31:50.974658+0000 mgr.y (mgr.14152) 116 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:31:52.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:51 vm07 bash[17480]: audit 2026-03-09T14:31:51.546606+0000 mgr.y (mgr.14152) 117 : audit [DBG] from='client.24260 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm11:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:31:52.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:51 vm07 bash[17480]: audit 2026-03-09T14:31:51.548138+0000 mon.a (mon.0) 496 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:31:52.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:51 vm07 bash[17480]: audit 2026-03-09T14:31:51.549431+0000 mon.a (mon.0) 497 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:31:52.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:51 vm07 bash[17480]: audit 2026-03-09T14:31:51.549873+0000 mon.a (mon.0) 498 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:31:53.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:52 vm11 bash[17885]: audit 2026-03-09T14:31:51.936808+0000 mon.a (mon.0) 499 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm11", "root=default"]}]': finished 2026-03-09T14:31:53.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:52 vm11 bash[17885]: cluster 2026-03-09T14:31:51.937141+0000 
mon.a (mon.0) 500 : cluster [DBG] osdmap e40: 7 total, 6 up, 7 in 2026-03-09T14:31:53.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:52 vm11 bash[17885]: audit 2026-03-09T14:31:51.938535+0000 mon.a (mon.0) 501 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:31:53.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:52 vm11 bash[17885]: audit 2026-03-09T14:31:51.940401+0000 mon.a (mon.0) 502 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:31:53.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:52 vm07 bash[22585]: audit 2026-03-09T14:31:51.936808+0000 mon.a (mon.0) 499 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm11", "root=default"]}]': finished 2026-03-09T14:31:53.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:52 vm07 bash[22585]: cluster 2026-03-09T14:31:51.937141+0000 mon.a (mon.0) 500 : cluster [DBG] osdmap e40: 7 total, 6 up, 7 in 2026-03-09T14:31:53.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:52 vm07 bash[22585]: audit 2026-03-09T14:31:51.938535+0000 mon.a (mon.0) 501 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:31:53.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:52 vm07 bash[22585]: audit 2026-03-09T14:31:51.940401+0000 mon.a (mon.0) 502 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:31:53.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:52 vm07 bash[17480]: audit 2026-03-09T14:31:51.936808+0000 mon.a (mon.0) 499 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm11", "root=default"]}]': finished 2026-03-09T14:31:53.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:52 vm07 bash[17480]: cluster 2026-03-09T14:31:51.937141+0000 mon.a (mon.0) 500 : cluster [DBG] osdmap e40: 7 total, 6 up, 7 in 2026-03-09T14:31:53.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:52 vm07 bash[17480]: audit 2026-03-09T14:31:51.938535+0000 mon.a (mon.0) 501 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:31:53.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:52 vm07 bash[17480]: audit 2026-03-09T14:31:51.940401+0000 mon.a (mon.0) 502 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:31:54.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:53 vm11 bash[17885]: cluster 2026-03-09T14:31:51.207396+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T14:31:54.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:53 vm11 bash[17885]: cluster 2026-03-09T14:31:51.207507+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T14:31:54.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:53 vm11 bash[17885]: cluster 2026-03-09T14:31:52.941253+0000 mon.a (mon.0) 503 : cluster [INF] osd.6 [v2:192.168.123.111:6816/615402579,v1:192.168.123.111:6817/615402579] boot 2026-03-09T14:31:54.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:53 vm11 bash[17885]: cluster 
2026-03-09T14:31:52.941362+0000 mon.a (mon.0) 504 : cluster [DBG] osdmap e41: 7 total, 7 up, 7 in 2026-03-09T14:31:54.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:53 vm11 bash[17885]: audit 2026-03-09T14:31:52.944393+0000 mon.a (mon.0) 505 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:31:54.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:53 vm11 bash[17885]: cluster 2026-03-09T14:31:52.974955+0000 mgr.y (mgr.14152) 118 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:31:54.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:53 vm11 bash[17885]: cluster 2026-03-09T14:31:53.943614+0000 mon.a (mon.0) 506 : cluster [DBG] osdmap e42: 7 total, 7 up, 7 in 2026-03-09T14:31:54.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:53 vm07 bash[22585]: cluster 2026-03-09T14:31:51.207396+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T14:31:54.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:53 vm07 bash[22585]: cluster 2026-03-09T14:31:51.207507+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T14:31:54.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:53 vm07 bash[22585]: cluster 2026-03-09T14:31:52.941253+0000 mon.a (mon.0) 503 : cluster [INF] osd.6 [v2:192.168.123.111:6816/615402579,v1:192.168.123.111:6817/615402579] boot 2026-03-09T14:31:54.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:53 vm07 bash[22585]: cluster 2026-03-09T14:31:52.941362+0000 mon.a (mon.0) 504 : cluster [DBG] osdmap e41: 7 total, 7 up, 7 in 2026-03-09T14:31:54.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:53 vm07 bash[22585]: audit 2026-03-09T14:31:52.944393+0000 mon.a (mon.0) 505 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:31:54.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:53 vm07 bash[22585]: cluster 2026-03-09T14:31:52.974955+0000 mgr.y (mgr.14152) 118 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:31:54.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:53 vm07 bash[22585]: cluster 2026-03-09T14:31:53.943614+0000 mon.a (mon.0) 506 : cluster [DBG] osdmap e42: 7 total, 7 up, 7 in 2026-03-09T14:31:54.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:53 vm07 bash[17480]: cluster 2026-03-09T14:31:51.207396+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T14:31:54.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:53 vm07 bash[17480]: cluster 2026-03-09T14:31:51.207507+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T14:31:54.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:53 vm07 bash[17480]: cluster 2026-03-09T14:31:52.941253+0000 mon.a (mon.0) 503 : cluster [INF] osd.6 [v2:192.168.123.111:6816/615402579,v1:192.168.123.111:6817/615402579] boot 2026-03-09T14:31:54.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:53 vm07 bash[17480]: cluster 2026-03-09T14:31:52.941362+0000 mon.a (mon.0) 504 : cluster [DBG] osdmap e41: 7 total, 7 up, 7 in 2026-03-09T14:31:54.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:53 vm07 bash[17480]: audit 2026-03-09T14:31:52.944393+0000 mon.a (mon.0) 505 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 
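The "osd crush create-or-move" weight of 0.0195 assigned to osd.6 above is simply the device capacity expressed in TiB (CRUSH weights are conventionally set to the size of the device in TiB): the pgmap lines show 120 GiB of space across the 6 OSDs that were up, i.e. about 20 GiB per OSD, and 20 GiB / 1024 ≈ 0.0195. A one-line check of that arithmetic:

    awk 'BEGIN { printf "%.4f\n", 20 / 1024 }'   # 20 GiB device as a CRUSH weight in TiB -> 0.0195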
2026-03-09T14:31:54.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:53 vm07 bash[17480]: cluster 2026-03-09T14:31:52.974955+0000 mgr.y (mgr.14152) 118 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:31:54.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:53 vm07 bash[17480]: cluster 2026-03-09T14:31:53.943614+0000 mon.a (mon.0) 506 : cluster [DBG] osdmap e42: 7 total, 7 up, 7 in 2026-03-09T14:31:56.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:55 vm07 bash[22585]: audit 2026-03-09T14:31:54.833643+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:56.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:55 vm07 bash[22585]: audit 2026-03-09T14:31:54.834623+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:31:56.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:55 vm07 bash[22585]: audit 2026-03-09T14:31:54.835094+0000 mon.a (mon.0) 509 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:31:56.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:55 vm07 bash[22585]: audit 2026-03-09T14:31:54.835489+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:31:56.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:55 vm07 bash[22585]: cephadm 2026-03-09T14:31:54.835794+0000 mgr.y (mgr.14152) 119 : cephadm [INF] Adjusting osd_memory_target on vm11 to 151.9M 2026-03-09T14:31:56.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:55 vm07 bash[22585]: cephadm 2026-03-09T14:31:54.836178+0000 mgr.y (mgr.14152) 120 : cephadm [WRN] Unable to set osd_memory_target on vm11 to 159306274: error parsing value: Value '159306274' is below minimum 939524096 2026-03-09T14:31:56.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:55 vm07 bash[22585]: audit 2026-03-09T14:31:54.842321+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:56.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:55 vm07 bash[22585]: cluster 2026-03-09T14:31:54.946984+0000 mon.a (mon.0) 512 : cluster [DBG] osdmap e43: 7 total, 7 up, 7 in 2026-03-09T14:31:56.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:55 vm07 bash[22585]: cluster 2026-03-09T14:31:54.975218+0000 mgr.y (mgr.14152) 121 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:31:56.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:55 vm07 bash[22585]: audit 2026-03-09T14:31:55.753230+0000 mon.b (mon.2) 18 : audit [INF] from='client.? 192.168.123.111:0/2430161316' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "abdf6bc5-5826-4388-bb2b-2d627c14c61b"}]: dispatch 2026-03-09T14:31:56.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:55 vm07 bash[22585]: audit 2026-03-09T14:31:55.753453+0000 mon.a (mon.0) 513 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "abdf6bc5-5826-4388-bb2b-2d627c14c61b"}]: dispatch 2026-03-09T14:31:56.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:55 vm07 bash[22585]: audit 2026-03-09T14:31:55.761142+0000 mon.a (mon.0) 514 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "abdf6bc5-5826-4388-bb2b-2d627c14c61b"}]': finished 2026-03-09T14:31:56.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:55 vm07 bash[22585]: cluster 2026-03-09T14:31:55.761186+0000 mon.a (mon.0) 515 : cluster [DBG] osdmap e44: 8 total, 7 up, 8 in 2026-03-09T14:31:56.168 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:55 vm07 bash[22585]: audit 2026-03-09T14:31:55.761253+0000 mon.a (mon.0) 516 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:31:56.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:55 vm07 bash[17480]: audit 2026-03-09T14:31:54.833643+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:56.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:55 vm07 bash[17480]: audit 2026-03-09T14:31:54.834623+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:31:56.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:55 vm07 bash[17480]: audit 2026-03-09T14:31:54.835094+0000 mon.a (mon.0) 509 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:31:56.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:55 vm07 bash[17480]: audit 2026-03-09T14:31:54.835489+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:31:56.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:55 vm07 bash[17480]: cephadm 2026-03-09T14:31:54.835794+0000 mgr.y (mgr.14152) 119 : cephadm [INF] Adjusting osd_memory_target on vm11 to 151.9M 2026-03-09T14:31:56.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:55 vm07 bash[17480]: cephadm 2026-03-09T14:31:54.836178+0000 mgr.y (mgr.14152) 120 : cephadm [WRN] Unable to set osd_memory_target on vm11 to 159306274: error parsing value: Value '159306274' is below minimum 939524096 2026-03-09T14:31:56.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:55 vm07 bash[17480]: audit 2026-03-09T14:31:54.842321+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:56.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:55 vm07 bash[17480]: cluster 2026-03-09T14:31:54.946984+0000 mon.a (mon.0) 512 : cluster [DBG] osdmap e43: 7 total, 7 up, 7 in 2026-03-09T14:31:56.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:55 vm07 bash[17480]: cluster 2026-03-09T14:31:54.975218+0000 mgr.y (mgr.14152) 121 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:31:56.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:55 vm07 bash[17480]: audit 2026-03-09T14:31:55.753230+0000 mon.b (mon.2) 18 : audit [INF] from='client.? 
192.168.123.111:0/2430161316' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "abdf6bc5-5826-4388-bb2b-2d627c14c61b"}]: dispatch 2026-03-09T14:31:56.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:55 vm07 bash[17480]: audit 2026-03-09T14:31:55.753453+0000 mon.a (mon.0) 513 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "abdf6bc5-5826-4388-bb2b-2d627c14c61b"}]: dispatch 2026-03-09T14:31:56.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:55 vm07 bash[17480]: audit 2026-03-09T14:31:55.761142+0000 mon.a (mon.0) 514 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "abdf6bc5-5826-4388-bb2b-2d627c14c61b"}]': finished 2026-03-09T14:31:56.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:55 vm07 bash[17480]: cluster 2026-03-09T14:31:55.761186+0000 mon.a (mon.0) 515 : cluster [DBG] osdmap e44: 8 total, 7 up, 8 in 2026-03-09T14:31:56.168 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:55 vm07 bash[17480]: audit 2026-03-09T14:31:55.761253+0000 mon.a (mon.0) 516 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:31:56.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:55 vm11 bash[17885]: audit 2026-03-09T14:31:54.833643+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:56.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:55 vm11 bash[17885]: audit 2026-03-09T14:31:54.834623+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:31:56.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:55 vm11 bash[17885]: audit 2026-03-09T14:31:54.835094+0000 mon.a (mon.0) 509 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:31:56.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:55 vm11 bash[17885]: audit 2026-03-09T14:31:54.835489+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:31:56.262 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:55 vm11 bash[17885]: cephadm 2026-03-09T14:31:54.835794+0000 mgr.y (mgr.14152) 119 : cephadm [INF] Adjusting osd_memory_target on vm11 to 151.9M 2026-03-09T14:31:56.262 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:55 vm11 bash[17885]: cephadm 2026-03-09T14:31:54.836178+0000 mgr.y (mgr.14152) 120 : cephadm [WRN] Unable to set osd_memory_target on vm11 to 159306274: error parsing value: Value '159306274' is below minimum 939524096 2026-03-09T14:31:56.262 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:55 vm11 bash[17885]: audit 2026-03-09T14:31:54.842321+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:31:56.262 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:55 vm11 bash[17885]: cluster 2026-03-09T14:31:54.946984+0000 mon.a (mon.0) 512 : cluster [DBG] osdmap e43: 7 total, 7 up, 7 in 2026-03-09T14:31:56.262 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:55 vm11 bash[17885]: cluster 2026-03-09T14:31:54.975218+0000 mgr.y (mgr.14152) 121 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 
449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:31:56.262 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:55 vm11 bash[17885]: audit 2026-03-09T14:31:55.753230+0000 mon.b (mon.2) 18 : audit [INF] from='client.? 192.168.123.111:0/2430161316' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "abdf6bc5-5826-4388-bb2b-2d627c14c61b"}]: dispatch 2026-03-09T14:31:56.262 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:55 vm11 bash[17885]: audit 2026-03-09T14:31:55.753453+0000 mon.a (mon.0) 513 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "abdf6bc5-5826-4388-bb2b-2d627c14c61b"}]: dispatch 2026-03-09T14:31:56.262 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:55 vm11 bash[17885]: audit 2026-03-09T14:31:55.761142+0000 mon.a (mon.0) 514 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "abdf6bc5-5826-4388-bb2b-2d627c14c61b"}]': finished 2026-03-09T14:31:56.262 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:55 vm11 bash[17885]: cluster 2026-03-09T14:31:55.761186+0000 mon.a (mon.0) 515 : cluster [DBG] osdmap e44: 8 total, 7 up, 8 in 2026-03-09T14:31:56.262 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:55 vm11 bash[17885]: audit 2026-03-09T14:31:55.761253+0000 mon.a (mon.0) 516 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:31:57.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:56 vm11 bash[17885]: audit 2026-03-09T14:31:56.485447+0000 mon.b (mon.2) 19 : audit [DBG] from='client.? 192.168.123.111:0/2512553608' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:31:57.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:56 vm07 bash[22585]: audit 2026-03-09T14:31:56.485447+0000 mon.b (mon.2) 19 : audit [DBG] from='client.? 192.168.123.111:0/2512553608' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:31:57.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:56 vm07 bash[17480]: audit 2026-03-09T14:31:56.485447+0000 mon.b (mon.2) 19 : audit [DBG] from='client.? 
192.168.123.111:0/2512553608' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:31:58.261 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:31:57 vm11 bash[17885]: cluster 2026-03-09T14:31:56.975510+0000 mgr.y (mgr.14152) 122 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:31:58.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:31:57 vm07 bash[22585]: cluster 2026-03-09T14:31:56.975510+0000 mgr.y (mgr.14152) 122 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:31:58.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:31:57 vm07 bash[17480]: cluster 2026-03-09T14:31:56.975510+0000 mgr.y (mgr.14152) 122 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:32:00.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:00 vm07 bash[22585]: cluster 2026-03-09T14:31:58.975801+0000 mgr.y (mgr.14152) 123 : cluster [DBG] pgmap v102: 1 pgs: 1 active+recovering; 449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:32:00.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:00 vm07 bash[17480]: cluster 2026-03-09T14:31:58.975801+0000 mgr.y (mgr.14152) 123 : cluster [DBG] pgmap v102: 1 pgs: 1 active+recovering; 449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:32:00.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:00 vm11 bash[17885]: cluster 2026-03-09T14:31:58.975801+0000 mgr.y (mgr.14152) 123 : cluster [DBG] pgmap v102: 1 pgs: 1 active+recovering; 449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:32:02.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:02 vm11 bash[17885]: cluster 2026-03-09T14:32:00.976051+0000 mgr.y (mgr.14152) 124 : cluster [DBG] pgmap v103: 1 pgs: 1 active+recovering; 449 KiB data, 42 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:32:02.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:02 vm11 bash[17885]: audit 2026-03-09T14:32:02.273663+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T14:32:02.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:02 vm11 bash[17885]: audit 2026-03-09T14:32:02.274275+0000 mon.a (mon.0) 518 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:32:02.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:02 vm07 bash[22585]: cluster 2026-03-09T14:32:00.976051+0000 mgr.y (mgr.14152) 124 : cluster [DBG] pgmap v103: 1 pgs: 1 active+recovering; 449 KiB data, 42 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:32:02.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:02 vm07 bash[22585]: audit 2026-03-09T14:32:02.273663+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T14:32:02.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:02 vm07 bash[22585]: audit 2026-03-09T14:32:02.274275+0000 mon.a (mon.0) 518 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:32:02.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:02 vm07 bash[17480]: cluster 2026-03-09T14:32:00.976051+0000 mgr.y (mgr.14152) 124 : cluster [DBG] pgmap v103: 1 pgs: 
1 active+recovering; 449 KiB data, 42 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:32:02.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:02 vm07 bash[17480]: audit 2026-03-09T14:32:02.273663+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T14:32:02.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:02 vm07 bash[17480]: audit 2026-03-09T14:32:02.274275+0000 mon.a (mon.0) 518 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:32:03.011 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:02 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:03.011 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:32:02 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:03.011 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:32:02 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:03.011 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:32:02 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:03.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:02 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:03.511 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:32:03 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:32:03.511 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:32:03 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:03.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:03 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:03.511 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:03 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:03.511 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:32:03 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
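The osd.7 deployment running through this stretch of the log follows the same three steps the task just used for osd.6: zap the spare device with ceph-volume, add it through the orchestrator, and once "Created osd(s)" is reported start following the new daemon's journal; the task then waits for all 8 OSDs to come up. Collected here only for readability, these are the commands as they appear in this run (image, fsid and paths copied from the surrounding log lines):

    CEPHADM=/home/ubuntu/cephtest/cephadm
    IMAGE=quay.io/ceph/ceph:v17.2.0
    FSID=f59f9828-1bc3-11f1-bfd8-7b3d0c866040
    # 1. wipe the target device
    sudo $CEPHADM --image $IMAGE ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid $FSID -- lvm zap /dev/vdb
    # 2. hand it to the orchestrator
    sudo $CEPHADM --image $IMAGE shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid $FSID -- ceph orch daemon add osd vm11:/dev/vdb
    # 3. follow the new daemon once it has been created
    sudo journalctl -f -n 0 -u ceph-$FSID@osd.7.service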
2026-03-09T14:32:03.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:03 vm07 bash[22585]: cephadm 2026-03-09T14:32:02.274693+0000 mgr.y (mgr.14152) 125 : cephadm [INF] Deploying daemon osd.7 on vm11 2026-03-09T14:32:03.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:03 vm07 bash[22585]: audit 2026-03-09T14:32:03.141189+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:32:03.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:03 vm07 bash[22585]: audit 2026-03-09T14:32:03.169856+0000 mon.a (mon.0) 520 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:32:03.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:03 vm07 bash[22585]: audit 2026-03-09T14:32:03.170675+0000 mon.a (mon.0) 521 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:32:03.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:03 vm07 bash[22585]: audit 2026-03-09T14:32:03.171095+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:32:03.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:03 vm07 bash[17480]: cephadm 2026-03-09T14:32:02.274693+0000 mgr.y (mgr.14152) 125 : cephadm [INF] Deploying daemon osd.7 on vm11 2026-03-09T14:32:03.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:03 vm07 bash[17480]: audit 2026-03-09T14:32:03.141189+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:32:03.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:03 vm07 bash[17480]: audit 2026-03-09T14:32:03.169856+0000 mon.a (mon.0) 520 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:32:03.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:03 vm07 bash[17480]: audit 2026-03-09T14:32:03.170675+0000 mon.a (mon.0) 521 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:32:03.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:03 vm07 bash[17480]: audit 2026-03-09T14:32:03.171095+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:32:04.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:03 vm11 bash[17885]: cephadm 2026-03-09T14:32:02.274693+0000 mgr.y (mgr.14152) 125 : cephadm [INF] Deploying daemon osd.7 on vm11 2026-03-09T14:32:04.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:03 vm11 bash[17885]: audit 2026-03-09T14:32:03.141189+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:32:04.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:03 vm11 bash[17885]: audit 2026-03-09T14:32:03.169856+0000 mon.a (mon.0) 520 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:32:04.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:03 vm11 bash[17885]: audit 2026-03-09T14:32:03.170675+0000 mon.a (mon.0) 521 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:32:04.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:03 vm11 bash[17885]: audit 2026-03-09T14:32:03.171095+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:32:04.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:04 vm07 bash[22585]: cluster 2026-03-09T14:32:02.976304+0000 mgr.y (mgr.14152) 126 : cluster [DBG] pgmap v104: 1 pgs: 1 active+recovering; 449 KiB data, 42 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:32:04.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:04 vm07 bash[17480]: cluster 2026-03-09T14:32:02.976304+0000 mgr.y (mgr.14152) 126 : cluster [DBG] pgmap v104: 1 pgs: 1 active+recovering; 449 KiB data, 42 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:32:05.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:04 vm11 bash[17885]: cluster 2026-03-09T14:32:02.976304+0000 mgr.y (mgr.14152) 126 : cluster [DBG] pgmap v104: 1 pgs: 1 active+recovering; 449 KiB data, 42 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:32:06.551 INFO:teuthology.orchestra.run.vm11.stdout:Created osd(s) 7 on host 'vm11' 2026-03-09T14:32:06.625 DEBUG:teuthology.orchestra.run.vm11:osd.7> sudo journalctl -f -n 0 -u ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@osd.7.service 2026-03-09T14:32:06.627 INFO:tasks.cephadm:Waiting for 8 OSDs to come up... 2026-03-09T14:32:06.627 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph osd stat -f json 2026-03-09T14:32:06.887 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:06 vm07 bash[22585]: cluster 2026-03-09T14:32:04.976562+0000 mgr.y (mgr.14152) 127 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 42 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:32:06.887 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:06 vm07 bash[22585]: audit 2026-03-09T14:32:06.105741+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:32:06.887 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:06 vm07 bash[22585]: audit 2026-03-09T14:32:06.272199+0000 mon.a (mon.0) 524 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:32:06.887 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:06 vm07 bash[22585]: audit 2026-03-09T14:32:06.362604+0000 mon.a (mon.0) 525 : audit [INF] from='osd.7 [v2:192.168.123.111:6824/2351382271,v1:192.168.123.111:6825/2351382271]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T14:32:06.887 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:06 vm07 bash[22585]: audit 2026-03-09T14:32:06.543964+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:32:06.887 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:06 vm07 bash[22585]: audit 2026-03-09T14:32:06.588017+0000 mon.a (mon.0) 527 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:32:06.887 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:06 vm07 bash[22585]: audit 2026-03-09T14:32:06.589301+0000 mon.a (mon.0) 528 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' 
entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:32:06.887 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:06 vm07 bash[22585]: audit 2026-03-09T14:32:06.590146+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:32:06.887 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:06 vm07 bash[17480]: cluster 2026-03-09T14:32:04.976562+0000 mgr.y (mgr.14152) 127 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 42 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:32:06.887 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:06 vm07 bash[17480]: audit 2026-03-09T14:32:06.105741+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:32:06.887 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:06 vm07 bash[17480]: audit 2026-03-09T14:32:06.272199+0000 mon.a (mon.0) 524 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:32:06.887 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:06 vm07 bash[17480]: audit 2026-03-09T14:32:06.362604+0000 mon.a (mon.0) 525 : audit [INF] from='osd.7 [v2:192.168.123.111:6824/2351382271,v1:192.168.123.111:6825/2351382271]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T14:32:06.887 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:06 vm07 bash[17480]: audit 2026-03-09T14:32:06.543964+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:32:06.887 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:06 vm07 bash[17480]: audit 2026-03-09T14:32:06.588017+0000 mon.a (mon.0) 527 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:32:06.887 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:06 vm07 bash[17480]: audit 2026-03-09T14:32:06.589301+0000 mon.a (mon.0) 528 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:32:06.887 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:06 vm07 bash[17480]: audit 2026-03-09T14:32:06.590146+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:32:07.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:06 vm11 bash[17885]: cluster 2026-03-09T14:32:04.976562+0000 mgr.y (mgr.14152) 127 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 42 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:32:07.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:06 vm11 bash[17885]: audit 2026-03-09T14:32:06.105741+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:32:07.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:06 vm11 bash[17885]: audit 2026-03-09T14:32:06.272199+0000 mon.a (mon.0) 524 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:32:07.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:06 vm11 bash[17885]: audit 2026-03-09T14:32:06.362604+0000 mon.a (mon.0) 525 : audit [INF] from='osd.7 [v2:192.168.123.111:6824/2351382271,v1:192.168.123.111:6825/2351382271]' entity='osd.7' cmd=[{"prefix": "osd crush 
set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T14:32:07.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:06 vm11 bash[17885]: audit 2026-03-09T14:32:06.543964+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:32:07.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:06 vm11 bash[17885]: audit 2026-03-09T14:32:06.588017+0000 mon.a (mon.0) 527 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:32:07.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:06 vm11 bash[17885]: audit 2026-03-09T14:32:06.589301+0000 mon.a (mon.0) 528 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:32:07.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:06 vm11 bash[17885]: audit 2026-03-09T14:32:06.590146+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:32:07.061 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T14:32:07.112 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":44,"num_osds":8,"num_up_osds":7,"osd_up_since":1773066712,"num_in_osds":8,"osd_in_since":1773066715,"num_remapped_pgs":0} 2026-03-09T14:32:07.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:07 vm07 bash[22585]: audit 2026-03-09T14:32:07.056049+0000 mon.c (mon.1) 17 : audit [DBG] from='client.? 192.168.123.107:0/1849880849' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T14:32:07.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:07 vm07 bash[22585]: audit 2026-03-09T14:32:07.278529+0000 mon.a (mon.0) 530 : audit [INF] from='osd.7 [v2:192.168.123.111:6824/2351382271,v1:192.168.123.111:6825/2351382271]' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T14:32:07.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:07 vm07 bash[22585]: cluster 2026-03-09T14:32:07.278587+0000 mon.a (mon.0) 531 : cluster [DBG] osdmap e45: 8 total, 7 up, 8 in 2026-03-09T14:32:07.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:07 vm07 bash[22585]: audit 2026-03-09T14:32:07.279165+0000 mon.a (mon.0) 532 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:32:07.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:07 vm07 bash[22585]: audit 2026-03-09T14:32:07.280160+0000 mon.a (mon.0) 533 : audit [INF] from='osd.7 [v2:192.168.123.111:6824/2351382271,v1:192.168.123.111:6825/2351382271]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:32:07.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:07 vm07 bash[17480]: audit 2026-03-09T14:32:07.056049+0000 mon.c (mon.1) 17 : audit [DBG] from='client.? 
192.168.123.107:0/1849880849' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T14:32:07.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:07 vm07 bash[17480]: audit 2026-03-09T14:32:07.278529+0000 mon.a (mon.0) 530 : audit [INF] from='osd.7 [v2:192.168.123.111:6824/2351382271,v1:192.168.123.111:6825/2351382271]' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T14:32:07.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:07 vm07 bash[17480]: cluster 2026-03-09T14:32:07.278587+0000 mon.a (mon.0) 531 : cluster [DBG] osdmap e45: 8 total, 7 up, 8 in 2026-03-09T14:32:07.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:07 vm07 bash[17480]: audit 2026-03-09T14:32:07.279165+0000 mon.a (mon.0) 532 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:32:07.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:07 vm07 bash[17480]: audit 2026-03-09T14:32:07.280160+0000 mon.a (mon.0) 533 : audit [INF] from='osd.7 [v2:192.168.123.111:6824/2351382271,v1:192.168.123.111:6825/2351382271]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:32:08.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:07 vm11 bash[17885]: audit 2026-03-09T14:32:07.056049+0000 mon.c (mon.1) 17 : audit [DBG] from='client.? 192.168.123.107:0/1849880849' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T14:32:08.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:07 vm11 bash[17885]: audit 2026-03-09T14:32:07.278529+0000 mon.a (mon.0) 530 : audit [INF] from='osd.7 [v2:192.168.123.111:6824/2351382271,v1:192.168.123.111:6825/2351382271]' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T14:32:08.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:07 vm11 bash[17885]: cluster 2026-03-09T14:32:07.278587+0000 mon.a (mon.0) 531 : cluster [DBG] osdmap e45: 8 total, 7 up, 8 in 2026-03-09T14:32:08.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:07 vm11 bash[17885]: audit 2026-03-09T14:32:07.279165+0000 mon.a (mon.0) 532 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:32:08.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:07 vm11 bash[17885]: audit 2026-03-09T14:32:07.280160+0000 mon.a (mon.0) 533 : audit [INF] from='osd.7 [v2:192.168.123.111:6824/2351382271,v1:192.168.123.111:6825/2351382271]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:32:08.113 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph osd stat -f json 2026-03-09T14:32:08.566 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T14:32:08.617 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":46,"num_osds":8,"num_up_osds":7,"osd_up_since":1773066712,"num_in_osds":8,"osd_in_since":1773066715,"num_remapped_pgs":0} 2026-03-09T14:32:08.641 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:32:08 vm11 bash[30285]: debug 2026-03-09T14:32:08.282+0000 7fb5c55b6700 -1 osd.7 0 waiting for 
initial osdmap 2026-03-09T14:32:08.641 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:32:08 vm11 bash[30285]: debug 2026-03-09T14:32:08.294+0000 7fb5c1750700 -1 osd.7 46 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T14:32:08.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:08 vm07 bash[22585]: cluster 2026-03-09T14:32:06.976795+0000 mgr.y (mgr.14152) 128 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:32:08.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:08 vm07 bash[22585]: audit 2026-03-09T14:32:08.280288+0000 mon.a (mon.0) 534 : audit [INF] from='osd.7 [v2:192.168.123.111:6824/2351382271,v1:192.168.123.111:6825/2351382271]' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm11", "root=default"]}]': finished 2026-03-09T14:32:08.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:08 vm07 bash[22585]: cluster 2026-03-09T14:32:08.280381+0000 mon.a (mon.0) 535 : cluster [DBG] osdmap e46: 8 total, 7 up, 8 in 2026-03-09T14:32:08.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:08 vm07 bash[22585]: audit 2026-03-09T14:32:08.280446+0000 mon.a (mon.0) 536 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:32:08.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:08 vm07 bash[22585]: audit 2026-03-09T14:32:08.284777+0000 mon.a (mon.0) 537 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:32:08.918 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:08 vm07 bash[22585]: audit 2026-03-09T14:32:08.561162+0000 mon.a (mon.0) 538 : audit [DBG] from='client.? 
192.168.123.107:0/2814866828' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T14:32:08.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:08 vm07 bash[17480]: cluster 2026-03-09T14:32:06.976795+0000 mgr.y (mgr.14152) 128 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:32:08.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:08 vm07 bash[17480]: audit 2026-03-09T14:32:08.280288+0000 mon.a (mon.0) 534 : audit [INF] from='osd.7 [v2:192.168.123.111:6824/2351382271,v1:192.168.123.111:6825/2351382271]' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm11", "root=default"]}]': finished 2026-03-09T14:32:08.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:08 vm07 bash[17480]: cluster 2026-03-09T14:32:08.280381+0000 mon.a (mon.0) 535 : cluster [DBG] osdmap e46: 8 total, 7 up, 8 in 2026-03-09T14:32:08.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:08 vm07 bash[17480]: audit 2026-03-09T14:32:08.280446+0000 mon.a (mon.0) 536 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:32:08.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:08 vm07 bash[17480]: audit 2026-03-09T14:32:08.284777+0000 mon.a (mon.0) 537 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:32:08.918 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:08 vm07 bash[17480]: audit 2026-03-09T14:32:08.561162+0000 mon.a (mon.0) 538 : audit [DBG] from='client.? 192.168.123.107:0/2814866828' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T14:32:09.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:08 vm11 bash[17885]: cluster 2026-03-09T14:32:06.976795+0000 mgr.y (mgr.14152) 128 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:32:09.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:08 vm11 bash[17885]: audit 2026-03-09T14:32:08.280288+0000 mon.a (mon.0) 534 : audit [INF] from='osd.7 [v2:192.168.123.111:6824/2351382271,v1:192.168.123.111:6825/2351382271]' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm11", "root=default"]}]': finished 2026-03-09T14:32:09.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:08 vm11 bash[17885]: cluster 2026-03-09T14:32:08.280381+0000 mon.a (mon.0) 535 : cluster [DBG] osdmap e46: 8 total, 7 up, 8 in 2026-03-09T14:32:09.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:08 vm11 bash[17885]: audit 2026-03-09T14:32:08.280446+0000 mon.a (mon.0) 536 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:32:09.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:08 vm11 bash[17885]: audit 2026-03-09T14:32:08.284777+0000 mon.a (mon.0) 537 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:32:09.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:08 vm11 bash[17885]: audit 2026-03-09T14:32:08.561162+0000 mon.a (mon.0) 538 : audit [DBG] from='client.? 
192.168.123.107:0/2814866828' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T14:32:09.618 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph osd stat -f json 2026-03-09T14:32:10.067 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T14:32:10.121 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":47,"num_osds":8,"num_up_osds":8,"osd_up_since":1773066729,"num_in_osds":8,"osd_in_since":1773066715,"num_remapped_pgs":1} 2026-03-09T14:32:10.121 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph osd dump --format=json 2026-03-09T14:32:10.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:10 vm07 bash[22585]: cluster 2026-03-09T14:32:07.305970+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T14:32:10.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:10 vm07 bash[22585]: cluster 2026-03-09T14:32:07.306061+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T14:32:10.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:10 vm07 bash[22585]: cluster 2026-03-09T14:32:08.977094+0000 mgr.y (mgr.14152) 129 : cluster [DBG] pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail; 40 KiB/s, 0 objects/s recovering 2026-03-09T14:32:10.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:10 vm07 bash[22585]: cluster 2026-03-09T14:32:09.284881+0000 mon.a (mon.0) 539 : cluster [INF] osd.7 [v2:192.168.123.111:6824/2351382271,v1:192.168.123.111:6825/2351382271] boot 2026-03-09T14:32:10.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:10 vm07 bash[22585]: cluster 2026-03-09T14:32:09.285006+0000 mon.a (mon.0) 540 : cluster [DBG] osdmap e47: 8 total, 8 up, 8 in 2026-03-09T14:32:10.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:10 vm07 bash[22585]: audit 2026-03-09T14:32:09.285277+0000 mon.a (mon.0) 541 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:32:10.418 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:10 vm07 bash[22585]: audit 2026-03-09T14:32:10.062874+0000 mon.a (mon.0) 542 : audit [DBG] from='client.? 
192.168.123.107:0/331682347' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T14:32:10.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:10 vm07 bash[17480]: cluster 2026-03-09T14:32:07.305970+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T14:32:10.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:10 vm07 bash[17480]: cluster 2026-03-09T14:32:07.306061+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T14:32:10.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:10 vm07 bash[17480]: cluster 2026-03-09T14:32:08.977094+0000 mgr.y (mgr.14152) 129 : cluster [DBG] pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail; 40 KiB/s, 0 objects/s recovering 2026-03-09T14:32:10.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:10 vm07 bash[17480]: cluster 2026-03-09T14:32:09.284881+0000 mon.a (mon.0) 539 : cluster [INF] osd.7 [v2:192.168.123.111:6824/2351382271,v1:192.168.123.111:6825/2351382271] boot 2026-03-09T14:32:10.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:10 vm07 bash[17480]: cluster 2026-03-09T14:32:09.285006+0000 mon.a (mon.0) 540 : cluster [DBG] osdmap e47: 8 total, 8 up, 8 in 2026-03-09T14:32:10.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:10 vm07 bash[17480]: audit 2026-03-09T14:32:09.285277+0000 mon.a (mon.0) 541 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:32:10.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:10 vm07 bash[17480]: audit 2026-03-09T14:32:10.062874+0000 mon.a (mon.0) 542 : audit [DBG] from='client.? 192.168.123.107:0/331682347' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T14:32:10.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:10 vm11 bash[17885]: cluster 2026-03-09T14:32:07.305970+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T14:32:10.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:10 vm11 bash[17885]: cluster 2026-03-09T14:32:07.306061+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T14:32:10.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:10 vm11 bash[17885]: cluster 2026-03-09T14:32:08.977094+0000 mgr.y (mgr.14152) 129 : cluster [DBG] pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail; 40 KiB/s, 0 objects/s recovering 2026-03-09T14:32:10.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:10 vm11 bash[17885]: cluster 2026-03-09T14:32:09.284881+0000 mon.a (mon.0) 539 : cluster [INF] osd.7 [v2:192.168.123.111:6824/2351382271,v1:192.168.123.111:6825/2351382271] boot 2026-03-09T14:32:10.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:10 vm11 bash[17885]: cluster 2026-03-09T14:32:09.285006+0000 mon.a (mon.0) 540 : cluster [DBG] osdmap e47: 8 total, 8 up, 8 in 2026-03-09T14:32:10.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:10 vm11 bash[17885]: audit 2026-03-09T14:32:09.285277+0000 mon.a (mon.0) 541 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:32:10.511 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:10 vm11 bash[17885]: audit 2026-03-09T14:32:10.062874+0000 mon.a (mon.0) 542 : audit [DBG] from='client.? 
192.168.123.107:0/331682347' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T14:32:11.667 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:11 vm07 bash[22585]: cluster 2026-03-09T14:32:10.287206+0000 mon.a (mon.0) 543 : cluster [DBG] osdmap e48: 8 total, 8 up, 8 in 2026-03-09T14:32:11.668 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:11 vm07 bash[22585]: audit 2026-03-09T14:32:11.011773+0000 mon.a (mon.0) 544 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:32:11.668 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:11 vm07 bash[22585]: audit 2026-03-09T14:32:11.012548+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:11.668 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:11 vm07 bash[22585]: audit 2026-03-09T14:32:11.013101+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:11.668 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:11 vm07 bash[22585]: audit 2026-03-09T14:32:11.013588+0000 mon.a (mon.0) 547 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:11.668 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:11 vm07 bash[22585]: audit 2026-03-09T14:32:11.014088+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:11.668 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:11 vm07 bash[22585]: audit 2026-03-09T14:32:11.019886+0000 mon.a (mon.0) 549 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:32:11.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:11 vm07 bash[17480]: cluster 2026-03-09T14:32:10.287206+0000 mon.a (mon.0) 543 : cluster [DBG] osdmap e48: 8 total, 8 up, 8 in 2026-03-09T14:32:11.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:11 vm07 bash[17480]: audit 2026-03-09T14:32:11.011773+0000 mon.a (mon.0) 544 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:32:11.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:11 vm07 bash[17480]: audit 2026-03-09T14:32:11.012548+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:11.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:11 vm07 bash[17480]: audit 2026-03-09T14:32:11.013101+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:11.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:11 vm07 bash[17480]: audit 2026-03-09T14:32:11.013588+0000 mon.a (mon.0) 547 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:11.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:11 vm07 bash[17480]: audit 2026-03-09T14:32:11.014088+0000 mon.a (mon.0) 548 : audit 
[INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:11.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:11 vm07 bash[17480]: audit 2026-03-09T14:32:11.019886+0000 mon.a (mon.0) 549 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:32:11.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:11 vm11 bash[17885]: cluster 2026-03-09T14:32:10.287206+0000 mon.a (mon.0) 543 : cluster [DBG] osdmap e48: 8 total, 8 up, 8 in 2026-03-09T14:32:11.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:11 vm11 bash[17885]: audit 2026-03-09T14:32:11.011773+0000 mon.a (mon.0) 544 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:32:11.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:11 vm11 bash[17885]: audit 2026-03-09T14:32:11.012548+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:11.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:11 vm11 bash[17885]: audit 2026-03-09T14:32:11.013101+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:11.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:11 vm11 bash[17885]: audit 2026-03-09T14:32:11.013588+0000 mon.a (mon.0) 547 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:11.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:11 vm11 bash[17885]: audit 2026-03-09T14:32:11.014088+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:11.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:11 vm11 bash[17885]: audit 2026-03-09T14:32:11.019886+0000 mon.a (mon.0) 549 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:32:12.667 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:12 vm07 bash[22585]: cluster 2026-03-09T14:32:10.977427+0000 mgr.y (mgr.14152) 130 : cluster [DBG] pgmap v112: 1 pgs: 1 remapped+peering; 449 KiB data, 47 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:12.667 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:12 vm07 bash[22585]: cephadm 2026-03-09T14:32:11.005002+0000 mgr.y (mgr.14152) 131 : cephadm [INF] Detected new or changed devices on vm11 2026-03-09T14:32:12.667 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:12 vm07 bash[22585]: cephadm 2026-03-09T14:32:11.014456+0000 mgr.y (mgr.14152) 132 : cephadm [INF] Adjusting osd_memory_target on vm11 to 113.9M 2026-03-09T14:32:12.667 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:12 vm07 bash[22585]: cephadm 2026-03-09T14:32:11.015071+0000 mgr.y (mgr.14152) 133 : cephadm [WRN] Unable to set osd_memory_target on vm11 to 119479705: error parsing value: Value '119479705' is below minimum 939524096 2026-03-09T14:32:12.667 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:12 vm07 bash[22585]: cluster 2026-03-09T14:32:11.300186+0000 mon.a (mon.0) 550 : cluster [DBG] osdmap e49: 8 total, 8 up, 8 in 2026-03-09T14:32:12.667 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:12 vm07 bash[22585]: cluster 2026-03-09T14:32:11.605634+0000 mon.a (mon.0) 551 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY) 2026-03-09T14:32:12.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:12 vm07 bash[17480]: cluster 2026-03-09T14:32:10.977427+0000 mgr.y (mgr.14152) 130 : cluster [DBG] pgmap v112: 1 pgs: 1 remapped+peering; 449 KiB data, 47 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:12.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:12 vm07 bash[17480]: cephadm 2026-03-09T14:32:11.005002+0000 mgr.y (mgr.14152) 131 : cephadm [INF] Detected new or changed devices on vm11 2026-03-09T14:32:12.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:12 vm07 bash[17480]: cephadm 2026-03-09T14:32:11.014456+0000 mgr.y (mgr.14152) 132 : cephadm [INF] Adjusting osd_memory_target on vm11 to 113.9M 2026-03-09T14:32:12.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:12 vm07 bash[17480]: cephadm 2026-03-09T14:32:11.015071+0000 mgr.y (mgr.14152) 133 : cephadm [WRN] Unable to set osd_memory_target on vm11 to 119479705: error parsing value: Value '119479705' is below minimum 939524096 2026-03-09T14:32:12.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:12 vm07 bash[17480]: cluster 2026-03-09T14:32:11.300186+0000 mon.a (mon.0) 550 : cluster [DBG] osdmap e49: 8 total, 8 up, 8 in 2026-03-09T14:32:12.668 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:12 vm07 bash[17480]: cluster 2026-03-09T14:32:11.605634+0000 mon.a (mon.0) 551 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY) 2026-03-09T14:32:12.727 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/mon.c/config 2026-03-09T14:32:12.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:12 vm11 bash[17885]: cluster 2026-03-09T14:32:10.977427+0000 mgr.y (mgr.14152) 130 : cluster [DBG] pgmap v112: 1 pgs: 1 remapped+peering; 449 KiB data, 47 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:12.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:12 vm11 bash[17885]: cephadm 2026-03-09T14:32:11.005002+0000 mgr.y (mgr.14152) 131 : cephadm [INF] Detected new or changed devices on vm11 2026-03-09T14:32:12.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:12 vm11 bash[17885]: cephadm 2026-03-09T14:32:11.014456+0000 mgr.y (mgr.14152) 132 : cephadm [INF] Adjusting osd_memory_target on vm11 to 113.9M 2026-03-09T14:32:12.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:12 vm11 bash[17885]: cephadm 2026-03-09T14:32:11.015071+0000 mgr.y (mgr.14152) 133 : cephadm [WRN] Unable to set osd_memory_target on vm11 to 119479705: error parsing value: Value '119479705' is below minimum 939524096 2026-03-09T14:32:12.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:12 vm11 bash[17885]: cluster 2026-03-09T14:32:11.300186+0000 mon.a (mon.0) 550 : cluster [DBG] osdmap e49: 8 total, 8 up, 8 in 2026-03-09T14:32:12.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:12 vm11 bash[17885]: cluster 2026-03-09T14:32:11.605634+0000 mon.a (mon.0) 551 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY) 2026-03-09T14:32:13.088 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T14:32:13.088 
INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":49,"fsid":"f59f9828-1bc3-11f1-bfd8-7b3d0c866040","created":"2026-03-09T14:29:19.844551+0000","modified":"2026-03-09T14:32:11.293147+0000","last_up_change":"2026-03-09T14:32:09.277693+0000","last_in_change":"2026-03-09T14:31:55.753851+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"quincy","pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T14:30:54.519768+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"21","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}}}],"osds":[{"osd":0,"uuid":"01f1c7a2-0d56-449a-98b5-2d0134c34758","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":47,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6802","nonce":3608472040},{"type":"v1","addr":"192.168.123.107:6803","nonce":3608472040}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6804","nonce":3608472040},{"type":"v1","addr":"192.168.123.107:6805","nonce":3608472040}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6808","nonce":3608472040},{"type":"v1","addr":"192.168.123.107:6809","nonce":3608472040}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6806","nonce":3608472040},{"type":"v1","addr":"192.168.123.107:6807","nonce":3608472040}]},"public_addr":"192.168.123.107:6803/3608472040","cluster_addr":"192.168.123.107:6805/3608472040","heartbeat_back_addr":"192.168.123.107:6809/3608472040","heartbeat_front_addr":"192.168.123.107:6807/3608472040","state":["exists","up"]},{"osd":1,"uuid":"c5bcdd68-0c8f-46dc-8a25-561605efa0ff","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":31,"down_at":0,"lost_at
":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6810","nonce":2809750614},{"type":"v1","addr":"192.168.123.107:6811","nonce":2809750614}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6812","nonce":2809750614},{"type":"v1","addr":"192.168.123.107:6813","nonce":2809750614}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6816","nonce":2809750614},{"type":"v1","addr":"192.168.123.107:6817","nonce":2809750614}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6814","nonce":2809750614},{"type":"v1","addr":"192.168.123.107:6815","nonce":2809750614}]},"public_addr":"192.168.123.107:6811/2809750614","cluster_addr":"192.168.123.107:6813/2809750614","heartbeat_back_addr":"192.168.123.107:6817/2809750614","heartbeat_front_addr":"192.168.123.107:6815/2809750614","state":["exists","up"]},{"osd":2,"uuid":"6878f209-d828-467d-8a66-6cca096732a5","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6818","nonce":2936867491},{"type":"v1","addr":"192.168.123.107:6819","nonce":2936867491}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6820","nonce":2936867491},{"type":"v1","addr":"192.168.123.107:6821","nonce":2936867491}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6824","nonce":2936867491},{"type":"v1","addr":"192.168.123.107:6825","nonce":2936867491}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6822","nonce":2936867491},{"type":"v1","addr":"192.168.123.107:6823","nonce":2936867491}]},"public_addr":"192.168.123.107:6819/2936867491","cluster_addr":"192.168.123.107:6821/2936867491","heartbeat_back_addr":"192.168.123.107:6825/2936867491","heartbeat_front_addr":"192.168.123.107:6823/2936867491","state":["exists","up"]},{"osd":3,"uuid":"afc54d82-66a7-42e1-83c1-0970428ef794","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":25,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6826","nonce":2142580280},{"type":"v1","addr":"192.168.123.107:6827","nonce":2142580280}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6828","nonce":2142580280},{"type":"v1","addr":"192.168.123.107:6829","nonce":2142580280}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6832","nonce":2142580280},{"type":"v1","addr":"192.168.123.107:6833","nonce":2142580280}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6830","nonce":2142580280},{"type":"v1","addr":"192.168.123.107:6831","nonce":2142580280}]},"public_addr":"192.168.123.107:6827/2142580280","cluster_addr":"192.168.123.107:6829/2142580280","heartbeat_back_addr":"192.168.123.107:6833/2142580280","heartbeat_front_addr":"192.168.123.107:6831/2142580280","state":["exists","up"]},{"osd":4,"uuid":"8e6cc346-4281-49a1-9886-18c25e9addfc","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":30,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6800","nonce":2733246535},{"type":"v1","addr":"192.168.123.111:6801","nonce":2733246535}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6802","nonce":2733246535},{"type":"v1","addr":"192.168.123.111:6803","nonce":2733246535}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","ad
dr":"192.168.123.111:6806","nonce":2733246535},{"type":"v1","addr":"192.168.123.111:6807","nonce":2733246535}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6804","nonce":2733246535},{"type":"v1","addr":"192.168.123.111:6805","nonce":2733246535}]},"public_addr":"192.168.123.111:6801/2733246535","cluster_addr":"192.168.123.111:6803/2733246535","heartbeat_back_addr":"192.168.123.111:6807/2733246535","heartbeat_front_addr":"192.168.123.111:6805/2733246535","state":["exists","up"]},{"osd":5,"uuid":"104be397-ca1c-4a2d-ae2d-97efa37d095a","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":36,"up_thru":37,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6808","nonce":122506048},{"type":"v1","addr":"192.168.123.111:6809","nonce":122506048}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6810","nonce":122506048},{"type":"v1","addr":"192.168.123.111:6811","nonce":122506048}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6814","nonce":122506048},{"type":"v1","addr":"192.168.123.111:6815","nonce":122506048}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6812","nonce":122506048},{"type":"v1","addr":"192.168.123.111:6813","nonce":122506048}]},"public_addr":"192.168.123.111:6809/122506048","cluster_addr":"192.168.123.111:6811/122506048","heartbeat_back_addr":"192.168.123.111:6815/122506048","heartbeat_front_addr":"192.168.123.111:6813/122506048","state":["exists","up"]},{"osd":6,"uuid":"77a63107-dca7-4e61-85ab-633ea82bcb7d","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":41,"up_thru":42,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6816","nonce":615402579},{"type":"v1","addr":"192.168.123.111:6817","nonce":615402579}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6818","nonce":615402579},{"type":"v1","addr":"192.168.123.111:6819","nonce":615402579}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6822","nonce":615402579},{"type":"v1","addr":"192.168.123.111:6823","nonce":615402579}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6820","nonce":615402579},{"type":"v1","addr":"192.168.123.111:6821","nonce":615402579}]},"public_addr":"192.168.123.111:6817/615402579","cluster_addr":"192.168.123.111:6819/615402579","heartbeat_back_addr":"192.168.123.111:6823/615402579","heartbeat_front_addr":"192.168.123.111:6821/615402579","state":["exists","up"]},{"osd":7,"uuid":"abdf6bc5-5826-4388-bb2b-2d627c14c61b","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":47,"up_thru":48,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6824","nonce":2351382271},{"type":"v1","addr":"192.168.123.111:6825","nonce":2351382271}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6826","nonce":2351382271},{"type":"v1","addr":"192.168.123.111:6827","nonce":2351382271}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6830","nonce":2351382271},{"type":"v1","addr":"192.168.123.111:6831","nonce":2351382271}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6828","nonce":2351382271},{"type":"v1","addr":"192.168.123.111:6829","nonce":2351382271}]},"public_addr":"192.168.123.111:6825/2351382271","cluster_addr":"192.168.123.111:6827/2351382271","heartbeat_bac
k_addr":"192.168.123.111:6831/2351382271","heartbeat_front_addr":"192.168.123.111:6829/2351382271","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:30:21.765859+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:30:37.134085+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:30:52.232648+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:31:08.073169+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:31:22.394871+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:31:37.533651+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:31:51.207510+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:32:07.306062+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.107:0/2850874332":"2026-03-10T14:29:42.227300+0000","192.168.123.107:0/2030379457":"2026-03-10T14:29:42.227300+0000","192.168.123.107:6801/735153467":"2026-03-10T14:29:42.227300+0000","192.168.123.107:0/1327540493":"2026-03-10T14:29:33.188169+0000","192.168.123.107:0/1561120863":"2026-03-10T14:29:33.188169+0000","192.168.123.107:6800/735153467":"2026-03-10T14:29:42.227300+0000","192.168.123.107:0/1541928502":"2026-03-10T14:29:42.227300+0000","192.168.123.107:0/3454294218":"2026-03-10T14:29:33.188169+0000","192.168.123.107:6800/3548929186":"2026-03-10T14:29:33.188169+0000","192.168.123.107:6801/3548929186":"2026-03-10T14:29:33.188169+0000"},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T14:32:13.146 INFO:tasks.cephadm.ceph_manager.ceph:[{'pool': 1, 'pool_name': '.mgr', 'create_time': '2026-03-09T14:30:54.519768+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 
'last_change': '21', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}}] 2026-03-09T14:32:13.146 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph osd pool get .mgr pg_num 2026-03-09T14:32:13.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:13 vm07 bash[22585]: audit 2026-03-09T14:32:13.083861+0000 mon.a (mon.0) 552 : audit [DBG] from='client.? 192.168.123.107:0/124236893' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:32:13.417 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:13 vm07 bash[17480]: audit 2026-03-09T14:32:13.083861+0000 mon.a (mon.0) 552 : audit [DBG] from='client.? 192.168.123.107:0/124236893' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:32:13.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:13 vm11 bash[17885]: audit 2026-03-09T14:32:13.083861+0000 mon.a (mon.0) 552 : audit [DBG] from='client.? 
192.168.123.107:0/124236893' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:32:14.667 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:14 vm07 bash[22585]: cluster 2026-03-09T14:32:12.977687+0000 mgr.y (mgr.14152) 134 : cluster [DBG] pgmap v114: 1 pgs: 1 remapped+peering; 449 KiB data, 47 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:14.667 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:14 vm07 bash[17480]: cluster 2026-03-09T14:32:12.977687+0000 mgr.y (mgr.14152) 134 : cluster [DBG] pgmap v114: 1 pgs: 1 remapped+peering; 449 KiB data, 47 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:14.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:14 vm11 bash[17885]: cluster 2026-03-09T14:32:12.977687+0000 mgr.y (mgr.14152) 134 : cluster [DBG] pgmap v114: 1 pgs: 1 remapped+peering; 449 KiB data, 47 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:15.667 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:15 vm07 bash[22585]: cluster 2026-03-09T14:32:15.319816+0000 mon.a (mon.0) 553 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-09T14:32:15.667 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:15 vm07 bash[22585]: cluster 2026-03-09T14:32:15.319843+0000 mon.a (mon.0) 554 : cluster [INF] Cluster is now healthy 2026-03-09T14:32:15.667 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:15 vm07 bash[17480]: cluster 2026-03-09T14:32:15.319816+0000 mon.a (mon.0) 553 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-09T14:32:15.667 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:15 vm07 bash[17480]: cluster 2026-03-09T14:32:15.319843+0000 mon.a (mon.0) 554 : cluster [INF] Cluster is now healthy 2026-03-09T14:32:15.756 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/mon.c/config 2026-03-09T14:32:15.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:15 vm11 bash[17885]: cluster 2026-03-09T14:32:15.319816+0000 mon.a (mon.0) 553 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering) 2026-03-09T14:32:15.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:15 vm11 bash[17885]: cluster 2026-03-09T14:32:15.319843+0000 mon.a (mon.0) 554 : cluster [INF] Cluster is now healthy 2026-03-09T14:32:16.118 INFO:teuthology.orchestra.run.vm07.stdout:pg_num: 1 2026-03-09T14:32:16.171 INFO:tasks.cephadm:Adding prometheus.a on vm11 2026-03-09T14:32:16.171 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph orch apply prometheus '1;vm11=a' 2026-03-09T14:32:16.591 INFO:teuthology.orchestra.run.vm11.stdout:Scheduled prometheus update... 
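At this point the cephadm task has confirmed that all 8 OSDs report up: it repeatedly runs "ceph osd stat -f json" through a cephadm shell on vm07 and reads num_up_osds from the JSON (epoch 47 above is the first poll showing 8 up), then checks pg_num on the .mgr pool before moving on to the monitoring services. A minimal sketch of that wait pattern, reusing the cephadm binary path, image, and fsid visible in this run; the helper name and timeout are illustrative and not teuthology's actual implementation:

    # Illustrative sketch only: poll "ceph osd stat -f json" via cephadm shell
    # until the expected number of OSDs reports up, as the task does above.
    import json
    import subprocess
    import time

    def wait_for_osds_up(want, fsid, image, timeout=600):
        cmd = [
            "sudo", "/home/ubuntu/cephtest/cephadm", "--image", image,
            "shell", "--fsid", fsid, "--",
            "ceph", "osd", "stat", "-f", "json",
        ]
        deadline = time.time() + timeout
        while time.time() < deadline:
            stat = json.loads(subprocess.check_output(cmd))
            # e.g. {"epoch":47,"num_osds":8,"num_up_osds":8,...} as logged above
            if stat["num_up_osds"] >= want:
                return stat
            time.sleep(1)
        raise RuntimeError("timed out waiting for %d OSDs to come up" % want)

    # For the run above this would be called roughly as:
    # wait_for_osds_up(8, "f59f9828-1bc3-11f1-bfd8-7b3d0c866040",
    #                  "quay.io/ceph/ceph:v17.2.0")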
2026-03-09T14:32:16.668 DEBUG:teuthology.orchestra.run.vm11:prometheus.a> sudo journalctl -f -n 0 -u ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@prometheus.a.service 2026-03-09T14:32:16.669 INFO:tasks.cephadm:Adding node-exporter.a on vm07 2026-03-09T14:32:16.669 INFO:tasks.cephadm:Adding node-exporter.b on vm11 2026-03-09T14:32:16.669 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph orch apply node-exporter '2;vm07=a;vm11=b' 2026-03-09T14:32:16.854 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:16 vm11 bash[17885]: cluster 2026-03-09T14:32:14.977940+0000 mgr.y (mgr.14152) 135 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:16.854 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:16 vm11 bash[17885]: audit 2026-03-09T14:32:16.113790+0000 mon.b (mon.2) 20 : audit [DBG] from='client.? 192.168.123.107:0/1425053390' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T14:32:16.854 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:16 vm11 bash[17885]: audit 2026-03-09T14:32:16.587365+0000 mon.a (mon.0) 555 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:32:16.854 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:16 vm11 bash[17885]: audit 2026-03-09T14:32:16.593095+0000 mon.a (mon.0) 556 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:32:16.854 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:16 vm11 bash[17885]: audit 2026-03-09T14:32:16.594173+0000 mon.a (mon.0) 557 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:32:16.854 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:16 vm11 bash[17885]: audit 2026-03-09T14:32:16.594916+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:32:16.854 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:16 vm11 bash[17885]: audit 2026-03-09T14:32:16.600968+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:32:16.854 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:16 vm11 bash[17885]: audit 2026-03-09T14:32:16.604639+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T14:32:16.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:16 vm07 bash[17480]: cluster 2026-03-09T14:32:14.977940+0000 mgr.y (mgr.14152) 135 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:16.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:16 vm07 bash[17480]: audit 2026-03-09T14:32:16.113790+0000 mon.b (mon.2) 20 : audit [DBG] from='client.? 
192.168.123.107:0/1425053390' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T14:32:16.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:16 vm07 bash[17480]: audit 2026-03-09T14:32:16.587365+0000 mon.a (mon.0) 555 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:32:16.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:16 vm07 bash[17480]: audit 2026-03-09T14:32:16.593095+0000 mon.a (mon.0) 556 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:32:16.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:16 vm07 bash[17480]: audit 2026-03-09T14:32:16.594173+0000 mon.a (mon.0) 557 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:32:16.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:16 vm07 bash[17480]: audit 2026-03-09T14:32:16.594916+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:32:16.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:16 vm07 bash[17480]: audit 2026-03-09T14:32:16.600968+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:32:16.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:16 vm07 bash[17480]: audit 2026-03-09T14:32:16.604639+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T14:32:16.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:16 vm07 bash[22585]: cluster 2026-03-09T14:32:14.977940+0000 mgr.y (mgr.14152) 135 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:16.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:16 vm07 bash[22585]: audit 2026-03-09T14:32:16.113790+0000 mon.b (mon.2) 20 : audit [DBG] from='client.? 
192.168.123.107:0/1425053390' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T14:32:16.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:16 vm07 bash[22585]: audit 2026-03-09T14:32:16.587365+0000 mon.a (mon.0) 555 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:32:16.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:16 vm07 bash[22585]: audit 2026-03-09T14:32:16.593095+0000 mon.a (mon.0) 556 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:32:16.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:16 vm07 bash[22585]: audit 2026-03-09T14:32:16.594173+0000 mon.a (mon.0) 557 : audit [DBG] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:32:16.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:16 vm07 bash[22585]: audit 2026-03-09T14:32:16.594916+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:32:16.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:16 vm07 bash[22585]: audit 2026-03-09T14:32:16.600968+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:32:16.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:16 vm07 bash[22585]: audit 2026-03-09T14:32:16.604639+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T14:32:17.145 INFO:teuthology.orchestra.run.vm11.stdout:Scheduled node-exporter update... 
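Note: each daemon cephadm schedules runs as a systemd unit named ceph-<fsid>@<daemon>.service, which is why the task can immediately attach a "journalctl -f" follower per daemon as soon as the apply is accepted. A hand-run check of the node-exporter daemons just scheduled might look like the sketch below (the unit name and fsid are taken from this run; the commands themselves are an assumption, not part of the task):

    # on vm07, the unit created for node-exporter.a
    sudo systemctl status ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@node-exporter.a.service
    # from inside the cephadm shell, the orchestrator's view of the same daemons
    ceph orch ps --daemon-type node-exporter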
2026-03-09T14:32:17.197 DEBUG:teuthology.orchestra.run.vm07:node-exporter.a> sudo journalctl -f -n 0 -u ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@node-exporter.a.service 2026-03-09T14:32:17.198 DEBUG:teuthology.orchestra.run.vm11:node-exporter.b> sudo journalctl -f -n 0 -u ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@node-exporter.b.service 2026-03-09T14:32:17.199 INFO:tasks.cephadm:Adding alertmanager.a on vm07 2026-03-09T14:32:17.199 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph orch apply alertmanager '1;vm07=a' 2026-03-09T14:32:17.737 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:17 vm11 bash[18539]: ignoring --setuser ceph since I am not root 2026-03-09T14:32:17.737 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:17 vm11 bash[18539]: ignoring --setgroup ceph since I am not root 2026-03-09T14:32:17.738 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:17 vm11 bash[17885]: audit 2026-03-09T14:32:16.578894+0000 mgr.y (mgr.14152) 136 : audit [DBG] from='client.24286 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm11=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:32:17.738 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:17 vm11 bash[17885]: cephadm 2026-03-09T14:32:16.579801+0000 mgr.y (mgr.14152) 137 : cephadm [INF] Saving service prometheus spec with placement vm11=a;count:1 2026-03-09T14:32:17.738 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:17 vm11 bash[17885]: audit 2026-03-09T14:32:17.140960+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:32:17.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:17 vm07 bash[17480]: audit 2026-03-09T14:32:16.578894+0000 mgr.y (mgr.14152) 136 : audit [DBG] from='client.24286 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm11=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:32:17.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:17 vm07 bash[17480]: cephadm 2026-03-09T14:32:16.579801+0000 mgr.y (mgr.14152) 137 : cephadm [INF] Saving service prometheus spec with placement vm11=a;count:1 2026-03-09T14:32:17.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:17 vm07 bash[17480]: audit 2026-03-09T14:32:17.140960+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:32:17.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:17 vm07 bash[22585]: audit 2026-03-09T14:32:16.578894+0000 mgr.y (mgr.14152) 136 : audit [DBG] from='client.24286 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm11=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:32:17.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:17 vm07 bash[22585]: cephadm 2026-03-09T14:32:16.579801+0000 mgr.y (mgr.14152) 137 : cephadm [INF] Saving service prometheus spec with placement vm11=a;count:1 2026-03-09T14:32:17.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:17 vm07 bash[22585]: audit 2026-03-09T14:32:17.140960+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' 2026-03-09T14:32:17.917 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:17 vm07 bash[17785]: ignoring --setuser ceph since I am not root 2026-03-09T14:32:17.918 
INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:17 vm07 bash[17785]: ignoring --setgroup ceph since I am not root 2026-03-09T14:32:17.918 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:17 vm07 bash[17785]: debug 2026-03-09T14:32:17.661+0000 7f9d4fdc0700 1 -- 192.168.123.107:0/2508655205 <== mon.1 v2:192.168.123.107:3301/0 4 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 194+0+0 (secure 0 0 0) 0x5640db5f0340 con 0x5640dc36cc00 2026-03-09T14:32:17.918 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:17 vm07 bash[17785]: debug 2026-03-09T14:32:17.737+0000 7f9d5881c000 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T14:32:17.918 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:17 vm07 bash[17785]: debug 2026-03-09T14:32:17.789+0000 7f9d5881c000 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T14:32:18.011 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:17 vm11 bash[18539]: debug 2026-03-09T14:32:17.731+0000 7f2c631cd000 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T14:32:18.011 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:17 vm11 bash[18539]: debug 2026-03-09T14:32:17.783+0000 7f2c631cd000 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T14:32:18.417 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:18 vm07 bash[17785]: debug 2026-03-09T14:32:18.097+0000 7f9d5881c000 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T14:32:18.510 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:18 vm11 bash[18539]: debug 2026-03-09T14:32:18.091+0000 7f2c631cd000 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T14:32:18.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:18 vm07 bash[17480]: audit 2026-03-09T14:32:17.615961+0000 mon.a (mon.0) 562 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T14:32:18.917 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:18 vm07 bash[17480]: cluster 2026-03-09T14:32:17.616010+0000 mon.a (mon.0) 563 : cluster [DBG] mgrmap e16: y(active, since 2m), standbys: x 2026-03-09T14:32:18.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:18 vm07 bash[22585]: audit 2026-03-09T14:32:17.615961+0000 mon.a (mon.0) 562 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T14:32:18.917 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:18 vm07 bash[22585]: cluster 2026-03-09T14:32:17.616010+0000 mon.a (mon.0) 563 : cluster [DBG] mgrmap e16: y(active, since 2m), standbys: x 2026-03-09T14:32:18.917 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:18 vm07 bash[17785]: debug 2026-03-09T14:32:18.617+0000 7f9d5881c000 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T14:32:18.917 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:18 vm07 bash[17785]: debug 2026-03-09T14:32:18.721+0000 7f9d5881c000 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T14:32:18.941 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:18 vm11 bash[18539]: debug 2026-03-09T14:32:18.623+0000 7f2c631cd000 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T14:32:18.942 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:18 vm11 bash[18539]: debug 2026-03-09T14:32:18.723+0000 7f2c631cd000 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T14:32:18.942 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:18 vm11 bash[17885]: audit 2026-03-09T14:32:17.615961+0000 mon.a (mon.0) 562 : audit [INF] from='mgr.14152 192.168.123.107:0/1878153069' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T14:32:18.942 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:18 vm11 bash[17885]: cluster 2026-03-09T14:32:17.616010+0000 mon.a (mon.0) 563 : cluster [DBG] mgrmap e16: y(active, since 2m), standbys: x 2026-03-09T14:32:19.236 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:18 vm11 bash[18539]: debug 2026-03-09T14:32:18.935+0000 7f2c631cd000 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T14:32:19.236 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:19 vm11 bash[18539]: debug 2026-03-09T14:32:19.043+0000 7f2c631cd000 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T14:32:19.236 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:19 vm11 bash[18539]: debug 2026-03-09T14:32:19.091+0000 7f2c631cd000 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T14:32:19.266 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:18 vm07 bash[17785]: debug 2026-03-09T14:32:18.941+0000 7f9d5881c000 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T14:32:19.266 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:19 vm07 bash[17785]: debug 2026-03-09T14:32:19.049+0000 7f9d5881c000 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T14:32:19.266 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:19 vm07 bash[17785]: debug 2026-03-09T14:32:19.105+0000 7f9d5881c000 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T14:32:19.510 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:19 vm11 bash[18539]: debug 2026-03-09T14:32:19.231+0000 7f2c631cd000 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T14:32:19.510 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:19 vm11 bash[18539]: debug 2026-03-09T14:32:19.295+0000 7f2c631cd000 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T14:32:19.510 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:19 vm11 bash[18539]: debug 2026-03-09T14:32:19.371+0000 7f2c631cd000 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T14:32:19.667 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:19 vm07 bash[17785]: debug 2026-03-09T14:32:19.257+0000 7f9d5881c000 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T14:32:19.667 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:19 vm07 bash[17785]: debug 2026-03-09T14:32:19.321+0000 7f9d5881c000 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T14:32:19.667 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:19 vm07 bash[17785]: debug 2026-03-09T14:32:19.393+0000 7f9d5881c000 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T14:32:20.260 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:19 vm11 bash[18539]: debug 2026-03-09T14:32:19.923+0000 7f2c631cd000 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T14:32:20.260 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:19 vm11 bash[18539]: debug 2026-03-09T14:32:19.991+0000 7f2c631cd000 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T14:32:20.260 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:20 vm11 bash[18539]: debug 2026-03-09T14:32:20.051+0000 7f2c631cd000 -1 mgr[py] Module progress has 
missing NOTIFY_TYPES member 2026-03-09T14:32:20.416 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:19 vm07 bash[17785]: debug 2026-03-09T14:32:19.953+0000 7f9d5881c000 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T14:32:20.416 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:20 vm07 bash[17785]: debug 2026-03-09T14:32:20.013+0000 7f9d5881c000 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T14:32:20.416 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:20 vm07 bash[17785]: debug 2026-03-09T14:32:20.077+0000 7f9d5881c000 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T14:32:20.760 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:20 vm11 bash[18539]: debug 2026-03-09T14:32:20.383+0000 7f2c631cd000 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T14:32:20.760 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:20 vm11 bash[18539]: debug 2026-03-09T14:32:20.447+0000 7f2c631cd000 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T14:32:20.760 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:20 vm11 bash[18539]: debug 2026-03-09T14:32:20.515+0000 7f2c631cd000 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T14:32:20.760 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:20 vm11 bash[18539]: debug 2026-03-09T14:32:20.599+0000 7f2c631cd000 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T14:32:20.917 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:20 vm07 bash[17785]: debug 2026-03-09T14:32:20.425+0000 7f9d5881c000 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T14:32:20.917 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:20 vm07 bash[17785]: debug 2026-03-09T14:32:20.497+0000 7f9d5881c000 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T14:32:20.917 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:20 vm07 bash[17785]: debug 2026-03-09T14:32:20.561+0000 7f9d5881c000 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T14:32:20.917 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:20 vm07 bash[17785]: debug 2026-03-09T14:32:20.654+0000 7f9d5881c000 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T14:32:21.203 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:20 vm11 bash[18539]: debug 2026-03-09T14:32:20.935+0000 7f2c631cd000 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T14:32:21.203 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:21 vm11 bash[18539]: debug 2026-03-09T14:32:21.143+0000 7f2c631cd000 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T14:32:21.258 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:20 vm07 bash[17785]: debug 2026-03-09T14:32:20.986+0000 7f9d5881c000 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T14:32:21.258 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:21 vm07 bash[17785]: debug 2026-03-09T14:32:21.190+0000 7f9d5881c000 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T14:32:21.510 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:21 vm11 bash[18539]: debug 2026-03-09T14:32:21.199+0000 7f2c631cd000 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T14:32:21.510 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:21 vm11 bash[18539]: debug 2026-03-09T14:32:21.263+0000 7f2c631cd000 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T14:32:21.510 
INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:21 vm11 bash[18539]: debug 2026-03-09T14:32:21.411+0000 7f2c631cd000 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T14:32:21.666 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:21 vm07 bash[17785]: debug 2026-03-09T14:32:21.250+0000 7f9d5881c000 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T14:32:21.666 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:21 vm07 bash[17785]: debug 2026-03-09T14:32:21.318+0000 7f9d5881c000 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T14:32:21.666 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:21 vm07 bash[17785]: debug 2026-03-09T14:32:21.470+0000 7f9d5881c000 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T14:32:22.260 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:21 vm11 bash[18539]: debug 2026-03-09T14:32:21.911+0000 7f2c631cd000 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T14:32:22.260 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:21 vm11 bash[18539]: [09/Mar/2026:14:32:21] ENGINE Bus STARTING 2026-03-09T14:32:22.260 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:22 vm11 bash[18539]: CherryPy Checker: 2026-03-09T14:32:22.260 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:22 vm11 bash[18539]: The Application mounted at '' has an empty config. 2026-03-09T14:32:22.260 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:22 vm11 bash[18539]: [09/Mar/2026:14:32:22] ENGINE Serving on http://:::9283 2026-03-09T14:32:22.260 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:22 vm11 bash[18539]: [09/Mar/2026:14:32:22] ENGINE Bus STARTED 2026-03-09T14:32:22.260 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:21 vm11 bash[17885]: cluster 2026-03-09T14:32:21.917931+0000 mon.a (mon.0) 564 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T14:32:22.260 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:21 vm11 bash[17885]: cluster 2026-03-09T14:32:21.918021+0000 mon.a (mon.0) 565 : cluster [DBG] Standby manager daemon x started 2026-03-09T14:32:22.260 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:21 vm11 bash[17885]: audit 2026-03-09T14:32:21.920728+0000 mon.b (mon.2) 21 : audit [DBG] from='mgr.? 192.168.123.111:0/1537797916' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T14:32:22.260 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:21 vm11 bash[17885]: audit 2026-03-09T14:32:21.921568+0000 mon.b (mon.2) 22 : audit [DBG] from='mgr.? 192.168.123.111:0/1537797916' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T14:32:22.260 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:21 vm11 bash[17885]: audit 2026-03-09T14:32:21.923117+0000 mon.b (mon.2) 23 : audit [DBG] from='mgr.? 192.168.123.111:0/1537797916' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T14:32:22.260 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:21 vm11 bash[17885]: audit 2026-03-09T14:32:21.923546+0000 mon.b (mon.2) 24 : audit [DBG] from='mgr.? 
192.168.123.111:0/1537797916' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T14:32:22.416 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:21 vm07 bash[22585]: cluster 2026-03-09T14:32:21.917931+0000 mon.a (mon.0) 564 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T14:32:22.416 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:21 vm07 bash[22585]: cluster 2026-03-09T14:32:21.918021+0000 mon.a (mon.0) 565 : cluster [DBG] Standby manager daemon x started 2026-03-09T14:32:22.416 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:21 vm07 bash[22585]: audit 2026-03-09T14:32:21.920728+0000 mon.b (mon.2) 21 : audit [DBG] from='mgr.? 192.168.123.111:0/1537797916' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T14:32:22.416 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:21 vm07 bash[22585]: audit 2026-03-09T14:32:21.921568+0000 mon.b (mon.2) 22 : audit [DBG] from='mgr.? 192.168.123.111:0/1537797916' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T14:32:22.416 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:21 vm07 bash[22585]: audit 2026-03-09T14:32:21.923117+0000 mon.b (mon.2) 23 : audit [DBG] from='mgr.? 192.168.123.111:0/1537797916' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T14:32:22.416 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:21 vm07 bash[22585]: audit 2026-03-09T14:32:21.923546+0000 mon.b (mon.2) 24 : audit [DBG] from='mgr.? 192.168.123.111:0/1537797916' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T14:32:22.416 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:21 vm07 bash[17480]: cluster 2026-03-09T14:32:21.917931+0000 mon.a (mon.0) 564 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T14:32:22.416 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:21 vm07 bash[17480]: cluster 2026-03-09T14:32:21.918021+0000 mon.a (mon.0) 565 : cluster [DBG] Standby manager daemon x started 2026-03-09T14:32:22.416 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:21 vm07 bash[17480]: audit 2026-03-09T14:32:21.920728+0000 mon.b (mon.2) 21 : audit [DBG] from='mgr.? 192.168.123.111:0/1537797916' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T14:32:22.416 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:21 vm07 bash[17480]: audit 2026-03-09T14:32:21.921568+0000 mon.b (mon.2) 22 : audit [DBG] from='mgr.? 192.168.123.111:0/1537797916' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T14:32:22.416 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:21 vm07 bash[17480]: audit 2026-03-09T14:32:21.923117+0000 mon.b (mon.2) 23 : audit [DBG] from='mgr.? 192.168.123.111:0/1537797916' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T14:32:22.416 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:21 vm07 bash[17480]: audit 2026-03-09T14:32:21.923546+0000 mon.b (mon.2) 24 : audit [DBG] from='mgr.? 
192.168.123.111:0/1537797916' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T14:32:22.416 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:22 vm07 bash[17785]: debug 2026-03-09T14:32:22.006+0000 7f9d5881c000 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T14:32:23.260 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:23 vm11 bash[17885]: cluster 2026-03-09T14:32:21.979941+0000 mon.a (mon.0) 566 : cluster [DBG] mgrmap e17: y(active, since 2m), standbys: x 2026-03-09T14:32:23.260 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:23 vm11 bash[17885]: cluster 2026-03-09T14:32:22.013643+0000 mon.a (mon.0) 567 : cluster [INF] Active manager daemon y restarted 2026-03-09T14:32:23.260 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:23 vm11 bash[17885]: cluster 2026-03-09T14:32:22.014573+0000 mon.a (mon.0) 568 : cluster [INF] Activating manager daemon y 2026-03-09T14:32:23.260 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:23 vm11 bash[17885]: cluster 2026-03-09T14:32:22.019596+0000 mon.a (mon.0) 569 : cluster [DBG] osdmap e50: 8 total, 8 up, 8 in 2026-03-09T14:32:23.360 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:23 vm07 bash[17480]: cluster 2026-03-09T14:32:21.979941+0000 mon.a (mon.0) 566 : cluster [DBG] mgrmap e17: y(active, since 2m), standbys: x 2026-03-09T14:32:23.361 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:23 vm07 bash[17480]: cluster 2026-03-09T14:32:22.013643+0000 mon.a (mon.0) 567 : cluster [INF] Active manager daemon y restarted 2026-03-09T14:32:23.361 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:23 vm07 bash[17480]: cluster 2026-03-09T14:32:22.014573+0000 mon.a (mon.0) 568 : cluster [INF] Activating manager daemon y 2026-03-09T14:32:23.361 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:23 vm07 bash[17480]: cluster 2026-03-09T14:32:22.019596+0000 mon.a (mon.0) 569 : cluster [DBG] osdmap e50: 8 total, 8 up, 8 in 2026-03-09T14:32:23.361 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:23 vm07 bash[17785]: [09/Mar/2026:14:32:23] ENGINE Bus STARTING 2026-03-09T14:32:23.361 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:23 vm07 bash[17785]: [09/Mar/2026:14:32:23] ENGINE Bus STARTING 2026-03-09T14:32:23.361 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:23 vm07 bash[17785]: CherryPy Checker: 2026-03-09T14:32:23.361 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:23 vm07 bash[17785]: The Application mounted at '' has an empty config. 
2026-03-09T14:32:23.361 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:23 vm07 bash[17785]: [09/Mar/2026:14:32:23] ENGINE Serving on http://:::9283 2026-03-09T14:32:23.361 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:23 vm07 bash[17785]: [09/Mar/2026:14:32:23] ENGINE Bus STARTED 2026-03-09T14:32:23.361 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:23 vm07 bash[17785]: [09/Mar/2026:14:32:23] ENGINE Serving on https://192.168.123.107:7150 2026-03-09T14:32:23.361 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:23 vm07 bash[17785]: [09/Mar/2026:14:32:23] ENGINE Bus STARTED 2026-03-09T14:32:23.361 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:23 vm07 bash[22585]: cluster 2026-03-09T14:32:21.979941+0000 mon.a (mon.0) 566 : cluster [DBG] mgrmap e17: y(active, since 2m), standbys: x 2026-03-09T14:32:23.361 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:23 vm07 bash[22585]: cluster 2026-03-09T14:32:22.013643+0000 mon.a (mon.0) 567 : cluster [INF] Active manager daemon y restarted 2026-03-09T14:32:23.361 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:23 vm07 bash[22585]: cluster 2026-03-09T14:32:22.014573+0000 mon.a (mon.0) 568 : cluster [INF] Activating manager daemon y 2026-03-09T14:32:23.361 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:23 vm07 bash[22585]: cluster 2026-03-09T14:32:22.019596+0000 mon.a (mon.0) 569 : cluster [DBG] osdmap e50: 8 total, 8 up, 8 in 2026-03-09T14:32:24.059 INFO:teuthology.orchestra.run.vm11.stdout:Scheduled alertmanager update... 2026-03-09T14:32:24.115 DEBUG:teuthology.orchestra.run.vm07:alertmanager.a> sudo journalctl -f -n 0 -u ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@alertmanager.a.service 2026-03-09T14:32:24.116 INFO:tasks.cephadm:Adding grafana.a on vm11 2026-03-09T14:32:24.116 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph orch apply grafana '1;vm11=a' 2026-03-09T14:32:24.282 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:24 vm11 bash[17885]: cluster 2026-03-09T14:32:22.997975+0000 mon.a (mon.0) 570 : cluster [DBG] mgrmap e18: y(active, starting, since 0.983487s), standbys: x 2026-03-09T14:32:24.282 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:24 vm11 bash[17885]: audit 2026-03-09T14:32:23.000443+0000 mon.b (mon.2) 25 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:32:24.282 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:24 vm11 bash[17885]: audit 2026-03-09T14:32:23.000749+0000 mon.b (mon.2) 26 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:32:24.282 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:24 vm11 bash[17885]: audit 2026-03-09T14:32:23.001079+0000 mon.b (mon.2) 27 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:32:24.282 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:24 vm11 bash[17885]: audit 2026-03-09T14:32:23.001328+0000 mon.b (mon.2) 28 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T14:32:24.282 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:24 vm11 bash[17885]: audit 2026-03-09T14:32:23.011839+0000 mon.b (mon.2) 29 
: audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T14:32:24.282 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:24 vm11 bash[17885]: audit 2026-03-09T14:32:23.012206+0000 mon.b (mon.2) 30 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:32:24.282 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:24 vm11 bash[17885]: audit 2026-03-09T14:32:23.012573+0000 mon.b (mon.2) 31 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:32:24.282 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:24 vm11 bash[17885]: audit 2026-03-09T14:32:23.012904+0000 mon.b (mon.2) 32 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:32:24.282 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:24 vm11 bash[17885]: audit 2026-03-09T14:32:23.013250+0000 mon.b (mon.2) 33 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:32:24.282 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:24 vm11 bash[17885]: audit 2026-03-09T14:32:23.013582+0000 mon.b (mon.2) 34 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:32:24.282 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:24 vm11 bash[17885]: audit 2026-03-09T14:32:23.013910+0000 mon.b (mon.2) 35 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:32:24.282 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:24 vm11 bash[17885]: audit 2026-03-09T14:32:23.014257+0000 mon.b (mon.2) 36 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:32:24.282 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:24 vm11 bash[17885]: audit 2026-03-09T14:32:23.014588+0000 mon.b (mon.2) 37 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:32:24.282 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:24 vm11 bash[17885]: audit 2026-03-09T14:32:23.015033+0000 mon.b (mon.2) 38 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T14:32:24.282 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:24 vm11 bash[17885]: audit 2026-03-09T14:32:23.015364+0000 mon.b (mon.2) 39 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T14:32:24.282 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:24 vm11 bash[17885]: audit 2026-03-09T14:32:23.015856+0000 mon.b (mon.2) 40 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T14:32:24.282 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:24 vm11 bash[17885]: cluster 2026-03-09T14:32:23.022287+0000 mon.a (mon.0) 571 : cluster [INF] Manager daemon y is now available 2026-03-09T14:32:24.282 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:24 vm11 bash[17885]: audit 2026-03-09T14:32:23.041161+0000 mon.b (mon.2) 41 : audit [DBG] 
from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:32:24.282 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:24 vm11 bash[17885]: audit 2026-03-09T14:32:23.043211+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:24.282 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:24 vm11 bash[17885]: audit 2026-03-09T14:32:23.047450+0000 mon.b (mon.2) 42 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:32:24.282 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:24 vm11 bash[17885]: audit 2026-03-09T14:32:23.050252+0000 mon.b (mon.2) 43 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:32:24.282 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:24 vm11 bash[17885]: audit 2026-03-09T14:32:23.053605+0000 mon.b (mon.2) 44 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:32:24.282 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:24 vm11 bash[17885]: audit 2026-03-09T14:32:23.077781+0000 mon.b (mon.2) 45 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:32:24.282 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:24 vm11 bash[17885]: audit 2026-03-09T14:32:23.078357+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:32:24.282 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:24 vm11 bash[17885]: audit 2026-03-09T14:32:23.099544+0000 mon.b (mon.2) 46 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:32:24.282 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:24 vm11 bash[17885]: audit 2026-03-09T14:32:23.100184+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:32:24.282 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:24 vm11 bash[17885]: cephadm 2026-03-09T14:32:23.203659+0000 mgr.y (mgr.24310) 1 : cephadm [INF] [09/Mar/2026:14:32:23] ENGINE Bus STARTING 2026-03-09T14:32:24.282 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:24 vm11 bash[17885]: cephadm 2026-03-09T14:32:23.357299+0000 mgr.y (mgr.24310) 2 : cephadm [INF] [09/Mar/2026:14:32:23] ENGINE Serving on https://192.168.123.107:7150 2026-03-09T14:32:24.282 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:24 vm11 bash[17885]: cephadm 2026-03-09T14:32:23.357596+0000 mgr.y (mgr.24310) 3 : cephadm [INF] [09/Mar/2026:14:32:23] ENGINE Bus STARTED 2026-03-09T14:32:24.282 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:24 vm11 bash[17885]: audit 2026-03-09T14:32:23.367073+0000 mon.a (mon.0) 575 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:24.416 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:24 vm07 bash[22585]: cluster 2026-03-09T14:32:22.997975+0000 mon.a (mon.0) 570 : cluster [DBG] mgrmap e18: y(active, starting, since 0.983487s), standbys: x 
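Note: enabling the prometheus module restarts the active mgr, so mgr.y drops out and re-registers ("Manager daemon y is now available"), the cephadm module re-serves its HTTPS endpoint on port 7150, and the rbd_support module clears its per-daemon schedule keys via "config rm". A hand-run sanity check of the mgr state after such a restart might be (illustrative only, not part of the task):

    # confirm which mgr is active and that the prometheus module is enabled
    ceph mgr stat
    ceph mgr module ls | grep -w prometheus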
2026-03-09T14:32:24.416 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:24 vm07 bash[22585]: audit 2026-03-09T14:32:23.000443+0000 mon.b (mon.2) 25 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:32:24.416 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:24 vm07 bash[22585]: audit 2026-03-09T14:32:23.000749+0000 mon.b (mon.2) 26 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:32:24.416 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:24 vm07 bash[22585]: audit 2026-03-09T14:32:23.001079+0000 mon.b (mon.2) 27 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:32:24.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:24 vm07 bash[22585]: audit 2026-03-09T14:32:23.001328+0000 mon.b (mon.2) 28 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T14:32:24.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:24 vm07 bash[22585]: audit 2026-03-09T14:32:23.011839+0000 mon.b (mon.2) 29 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T14:32:24.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:24 vm07 bash[22585]: audit 2026-03-09T14:32:23.012206+0000 mon.b (mon.2) 30 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:32:24.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:24 vm07 bash[22585]: audit 2026-03-09T14:32:23.012573+0000 mon.b (mon.2) 31 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:32:24.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:24 vm07 bash[22585]: audit 2026-03-09T14:32:23.012904+0000 mon.b (mon.2) 32 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:32:24.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:24 vm07 bash[22585]: audit 2026-03-09T14:32:23.013250+0000 mon.b (mon.2) 33 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:32:24.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:24 vm07 bash[22585]: audit 2026-03-09T14:32:23.013582+0000 mon.b (mon.2) 34 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:32:24.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:24 vm07 bash[22585]: audit 2026-03-09T14:32:23.013910+0000 mon.b (mon.2) 35 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:32:24.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:24 vm07 bash[22585]: audit 2026-03-09T14:32:23.014257+0000 mon.b (mon.2) 36 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:32:24.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:24 vm07 bash[22585]: audit 2026-03-09T14:32:23.014588+0000 mon.b (mon.2) 37 : audit [DBG] from='mgr.24310 
192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:32:24.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:24 vm07 bash[22585]: audit 2026-03-09T14:32:23.015033+0000 mon.b (mon.2) 38 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T14:32:24.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:24 vm07 bash[22585]: audit 2026-03-09T14:32:23.015364+0000 mon.b (mon.2) 39 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T14:32:24.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:24 vm07 bash[22585]: audit 2026-03-09T14:32:23.015856+0000 mon.b (mon.2) 40 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T14:32:24.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:24 vm07 bash[22585]: cluster 2026-03-09T14:32:23.022287+0000 mon.a (mon.0) 571 : cluster [INF] Manager daemon y is now available 2026-03-09T14:32:24.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:24 vm07 bash[22585]: audit 2026-03-09T14:32:23.041161+0000 mon.b (mon.2) 41 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:32:24.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:24 vm07 bash[22585]: audit 2026-03-09T14:32:23.043211+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:24.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:24 vm07 bash[22585]: audit 2026-03-09T14:32:23.047450+0000 mon.b (mon.2) 42 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:32:24.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:24 vm07 bash[22585]: audit 2026-03-09T14:32:23.050252+0000 mon.b (mon.2) 43 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:32:24.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:24 vm07 bash[22585]: audit 2026-03-09T14:32:23.053605+0000 mon.b (mon.2) 44 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:32:24.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:24 vm07 bash[22585]: audit 2026-03-09T14:32:23.077781+0000 mon.b (mon.2) 45 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:32:24.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:24 vm07 bash[22585]: audit 2026-03-09T14:32:23.078357+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:32:24.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:24 vm07 bash[22585]: audit 2026-03-09T14:32:23.099544+0000 mon.b (mon.2) 46 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:32:24.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:24 vm07 bash[22585]: audit 2026-03-09T14:32:23.100184+0000 mon.a 
(mon.0) 574 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:32:24.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:24 vm07 bash[22585]: cephadm 2026-03-09T14:32:23.203659+0000 mgr.y (mgr.24310) 1 : cephadm [INF] [09/Mar/2026:14:32:23] ENGINE Bus STARTING 2026-03-09T14:32:24.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:24 vm07 bash[22585]: cephadm 2026-03-09T14:32:23.357299+0000 mgr.y (mgr.24310) 2 : cephadm [INF] [09/Mar/2026:14:32:23] ENGINE Serving on https://192.168.123.107:7150 2026-03-09T14:32:24.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:24 vm07 bash[22585]: cephadm 2026-03-09T14:32:23.357596+0000 mgr.y (mgr.24310) 3 : cephadm [INF] [09/Mar/2026:14:32:23] ENGINE Bus STARTED 2026-03-09T14:32:24.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:24 vm07 bash[22585]: audit 2026-03-09T14:32:23.367073+0000 mon.a (mon.0) 575 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:24.417 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:24 vm07 bash[17480]: cluster 2026-03-09T14:32:22.997975+0000 mon.a (mon.0) 570 : cluster [DBG] mgrmap e18: y(active, starting, since 0.983487s), standbys: x 2026-03-09T14:32:24.417 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:24 vm07 bash[17480]: audit 2026-03-09T14:32:23.000443+0000 mon.b (mon.2) 25 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:32:24.417 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:24 vm07 bash[17480]: audit 2026-03-09T14:32:23.000749+0000 mon.b (mon.2) 26 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:32:24.417 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:24 vm07 bash[17480]: audit 2026-03-09T14:32:23.001079+0000 mon.b (mon.2) 27 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:32:24.417 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:24 vm07 bash[17480]: audit 2026-03-09T14:32:23.001328+0000 mon.b (mon.2) 28 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T14:32:24.417 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:24 vm07 bash[17480]: audit 2026-03-09T14:32:23.011839+0000 mon.b (mon.2) 29 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T14:32:24.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:24 vm07 bash[17480]: audit 2026-03-09T14:32:23.012206+0000 mon.b (mon.2) 30 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:32:24.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:24 vm07 bash[17480]: audit 2026-03-09T14:32:23.012573+0000 mon.b (mon.2) 31 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:32:24.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:24 vm07 bash[17480]: audit 2026-03-09T14:32:23.012904+0000 mon.b (mon.2) 32 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:32:24.418 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:24 vm07 bash[17480]: audit 2026-03-09T14:32:23.013250+0000 mon.b (mon.2) 33 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:32:24.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:24 vm07 bash[17480]: audit 2026-03-09T14:32:23.013582+0000 mon.b (mon.2) 34 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:32:24.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:24 vm07 bash[17480]: audit 2026-03-09T14:32:23.013910+0000 mon.b (mon.2) 35 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:32:24.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:24 vm07 bash[17480]: audit 2026-03-09T14:32:23.014257+0000 mon.b (mon.2) 36 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:32:24.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:24 vm07 bash[17480]: audit 2026-03-09T14:32:23.014588+0000 mon.b (mon.2) 37 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:32:24.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:24 vm07 bash[17480]: audit 2026-03-09T14:32:23.015033+0000 mon.b (mon.2) 38 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T14:32:24.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:24 vm07 bash[17480]: audit 2026-03-09T14:32:23.015364+0000 mon.b (mon.2) 39 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T14:32:24.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:24 vm07 bash[17480]: audit 2026-03-09T14:32:23.015856+0000 mon.b (mon.2) 40 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T14:32:24.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:24 vm07 bash[17480]: cluster 2026-03-09T14:32:23.022287+0000 mon.a (mon.0) 571 : cluster [INF] Manager daemon y is now available 2026-03-09T14:32:24.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:24 vm07 bash[17480]: audit 2026-03-09T14:32:23.041161+0000 mon.b (mon.2) 41 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:32:24.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:24 vm07 bash[17480]: audit 2026-03-09T14:32:23.043211+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:24.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:24 vm07 bash[17480]: audit 2026-03-09T14:32:23.047450+0000 mon.b (mon.2) 42 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:32:24.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:24 vm07 bash[17480]: audit 2026-03-09T14:32:23.050252+0000 mon.b (mon.2) 43 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:32:24.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:24 vm07 bash[17480]: audit 
2026-03-09T14:32:23.053605+0000 mon.b (mon.2) 44 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:32:24.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:24 vm07 bash[17480]: audit 2026-03-09T14:32:23.077781+0000 mon.b (mon.2) 45 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:32:24.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:24 vm07 bash[17480]: audit 2026-03-09T14:32:23.078357+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:32:24.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:24 vm07 bash[17480]: audit 2026-03-09T14:32:23.099544+0000 mon.b (mon.2) 46 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:32:24.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:24 vm07 bash[17480]: audit 2026-03-09T14:32:23.100184+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:32:24.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:24 vm07 bash[17480]: cephadm 2026-03-09T14:32:23.203659+0000 mgr.y (mgr.24310) 1 : cephadm [INF] [09/Mar/2026:14:32:23] ENGINE Bus STARTING 2026-03-09T14:32:24.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:24 vm07 bash[17480]: cephadm 2026-03-09T14:32:23.357299+0000 mgr.y (mgr.24310) 2 : cephadm [INF] [09/Mar/2026:14:32:23] ENGINE Serving on https://192.168.123.107:7150 2026-03-09T14:32:24.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:24 vm07 bash[17480]: cephadm 2026-03-09T14:32:23.357596+0000 mgr.y (mgr.24310) 3 : cephadm [INF] [09/Mar/2026:14:32:23] ENGINE Bus STARTED 2026-03-09T14:32:24.418 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:24 vm07 bash[17480]: audit 2026-03-09T14:32:23.367073+0000 mon.a (mon.0) 575 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:24.581 INFO:teuthology.orchestra.run.vm11.stdout:Scheduled grafana update... 2026-03-09T14:32:24.631 DEBUG:teuthology.orchestra.run.vm11:grafana.a> sudo journalctl -f -n 0 -u ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@grafana.a.service 2026-03-09T14:32:24.632 INFO:tasks.cephadm:Setting up client nodes... 
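Note: "Setting up client nodes" mints CephX keys for client.0 and client.1 with "ceph auth get-or-create" inside the cephadm shell and writes each key to /etc/ceph/ceph.client.<id>.keyring (mode 0644) on the matching host, as the next entries show. A hand-run equivalent of the client.0 step (a sketch using tee in place of the task's dd; same capabilities and path as below):

    # create (or fetch) the client.0 key and install it as a world-readable keyring
    ceph auth get-or-create client.0 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' | \
        sudo tee /etc/ceph/ceph.client.0.keyring >/dev/null
    sudo chmod 0644 /etc/ceph/ceph.client.0.keyring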
2026-03-09T14:32:24.632 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph auth get-or-create client.0 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-09T14:32:25.111 INFO:teuthology.orchestra.run.vm07.stdout:[client.0] 2026-03-09T14:32:25.111 INFO:teuthology.orchestra.run.vm07.stdout: key = AQD52a5pC1UzBhAApd4Q524vO0DBhrjon879/A== 2026-03-09T14:32:25.170 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-09T14:32:25.170 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/ceph/ceph.client.0.keyring 2026-03-09T14:32:25.170 DEBUG:teuthology.orchestra.run.vm07:> sudo chmod 0644 /etc/ceph/ceph.client.0.keyring 2026-03-09T14:32:25.182 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph auth get-or-create client.1 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-09T14:32:25.323 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:25 vm11 bash[17885]: cluster 2026-03-09T14:32:24.021898+0000 mon.a (mon.0) 576 : cluster [DBG] mgrmap e19: y(active, since 2s), standbys: x 2026-03-09T14:32:25.323 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:25 vm11 bash[17885]: audit 2026-03-09T14:32:24.024097+0000 mgr.y (mgr.24310) 4 : audit [DBG] from='client.24298 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm07=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:32:25.323 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:25 vm11 bash[17885]: cephadm 2026-03-09T14:32:24.027144+0000 mgr.y (mgr.24310) 5 : cephadm [INF] Saving service alertmanager spec with placement vm07=a;count:1 2026-03-09T14:32:25.323 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:25 vm11 bash[17885]: cluster 2026-03-09T14:32:24.044873+0000 mgr.y (mgr.24310) 6 : cluster [DBG] pgmap v3: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:25.323 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:25 vm11 bash[17885]: audit 2026-03-09T14:32:24.051893+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:25.323 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:25 vm11 bash[17885]: audit 2026-03-09T14:32:24.568492+0000 mgr.y (mgr.24310) 7 : audit [DBG] from='client.24338 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm11=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:32:25.323 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:25 vm11 bash[17885]: cephadm 2026-03-09T14:32:24.569517+0000 mgr.y (mgr.24310) 8 : cephadm [INF] Saving service grafana spec with placement vm11=a;count:1 2026-03-09T14:32:25.323 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:25 vm11 bash[17885]: audit 2026-03-09T14:32:24.577471+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:25.323 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:25 vm11 bash[17885]: cluster 2026-03-09T14:32:25.023721+0000 mon.a (mon.0) 579 : cluster [DBG] mgrmap e20: y(active, since 3s), standbys: x 2026-03-09T14:32:25.416 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:25 vm07 bash[17480]: cluster 2026-03-09T14:32:24.021898+0000 mon.a (mon.0) 576 : cluster [DBG] mgrmap 
e19: y(active, since 2s), standbys: x 2026-03-09T14:32:25.416 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:25 vm07 bash[17480]: audit 2026-03-09T14:32:24.024097+0000 mgr.y (mgr.24310) 4 : audit [DBG] from='client.24298 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm07=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:32:25.416 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:25 vm07 bash[17480]: cephadm 2026-03-09T14:32:24.027144+0000 mgr.y (mgr.24310) 5 : cephadm [INF] Saving service alertmanager spec with placement vm07=a;count:1 2026-03-09T14:32:25.416 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:25 vm07 bash[17480]: cluster 2026-03-09T14:32:24.044873+0000 mgr.y (mgr.24310) 6 : cluster [DBG] pgmap v3: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:25.416 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:25 vm07 bash[17480]: audit 2026-03-09T14:32:24.051893+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:25.416 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:25 vm07 bash[17480]: audit 2026-03-09T14:32:24.568492+0000 mgr.y (mgr.24310) 7 : audit [DBG] from='client.24338 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm11=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:32:25.416 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:25 vm07 bash[17480]: cephadm 2026-03-09T14:32:24.569517+0000 mgr.y (mgr.24310) 8 : cephadm [INF] Saving service grafana spec with placement vm11=a;count:1 2026-03-09T14:32:25.416 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:25 vm07 bash[17480]: audit 2026-03-09T14:32:24.577471+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:25.416 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:25 vm07 bash[17480]: cluster 2026-03-09T14:32:25.023721+0000 mon.a (mon.0) 579 : cluster [DBG] mgrmap e20: y(active, since 3s), standbys: x 2026-03-09T14:32:25.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:25 vm07 bash[22585]: cluster 2026-03-09T14:32:24.021898+0000 mon.a (mon.0) 576 : cluster [DBG] mgrmap e19: y(active, since 2s), standbys: x 2026-03-09T14:32:25.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:25 vm07 bash[22585]: audit 2026-03-09T14:32:24.024097+0000 mgr.y (mgr.24310) 4 : audit [DBG] from='client.24298 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm07=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:32:25.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:25 vm07 bash[22585]: cephadm 2026-03-09T14:32:24.027144+0000 mgr.y (mgr.24310) 5 : cephadm [INF] Saving service alertmanager spec with placement vm07=a;count:1 2026-03-09T14:32:25.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:25 vm07 bash[22585]: cluster 2026-03-09T14:32:24.044873+0000 mgr.y (mgr.24310) 6 : cluster [DBG] pgmap v3: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:25.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:25 vm07 bash[22585]: audit 2026-03-09T14:32:24.051893+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:25.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:25 vm07 bash[22585]: audit 2026-03-09T14:32:24.568492+0000 mgr.y (mgr.24310) 7 : audit [DBG] from='client.24338 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": 
"grafana", "placement": "1;vm11=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:32:25.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:25 vm07 bash[22585]: cephadm 2026-03-09T14:32:24.569517+0000 mgr.y (mgr.24310) 8 : cephadm [INF] Saving service grafana spec with placement vm11=a;count:1 2026-03-09T14:32:25.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:25 vm07 bash[22585]: audit 2026-03-09T14:32:24.577471+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:25.417 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:25 vm07 bash[22585]: cluster 2026-03-09T14:32:25.023721+0000 mon.a (mon.0) 579 : cluster [DBG] mgrmap e20: y(active, since 3s), standbys: x 2026-03-09T14:32:25.672 INFO:teuthology.orchestra.run.vm11.stdout:[client.1] 2026-03-09T14:32:25.672 INFO:teuthology.orchestra.run.vm11.stdout: key = AQD52a5peEiWJxAAshsioUx17mmuyac/cqa7Jg== 2026-03-09T14:32:25.720 DEBUG:teuthology.orchestra.run.vm11:> set -ex 2026-03-09T14:32:25.720 DEBUG:teuthology.orchestra.run.vm11:> sudo dd of=/etc/ceph/ceph.client.1.keyring 2026-03-09T14:32:25.720 DEBUG:teuthology.orchestra.run.vm11:> sudo chmod 0644 /etc/ceph/ceph.client.1.keyring 2026-03-09T14:32:25.732 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean... 2026-03-09T14:32:25.732 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available 2026-03-09T14:32:25.732 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph mgr dump --format=json 2026-03-09T14:32:26.043 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:26 vm07 bash[17480]: cluster 2026-03-09T14:32:25.003718+0000 mgr.y (mgr.24310) 9 : cluster [DBG] pgmap v4: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:26.043 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:26 vm07 bash[17480]: audit 2026-03-09T14:32:25.103846+0000 mon.a (mon.0) 580 : audit [INF] from='client.? 192.168.123.107:0/1635796782' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T14:32:26.043 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:26 vm07 bash[17480]: audit 2026-03-09T14:32:25.108631+0000 mon.a (mon.0) 581 : audit [INF] from='client.? 192.168.123.107:0/1635796782' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T14:32:26.043 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:26 vm07 bash[17480]: audit 2026-03-09T14:32:25.662986+0000 mon.b (mon.2) 47 : audit [INF] from='client.? 192.168.123.111:0/1723543585' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T14:32:26.043 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:26 vm07 bash[17480]: audit 2026-03-09T14:32:25.664032+0000 mon.a (mon.0) 582 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T14:32:26.043 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:26 vm07 bash[17480]: audit 2026-03-09T14:32:25.667782+0000 mon.a (mon.0) 583 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T14:32:26.298 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:26 vm07 bash[22585]: cluster 2026-03-09T14:32:25.003718+0000 mgr.y (mgr.24310) 9 : cluster [DBG] pgmap v4: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:26.298 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:26 vm07 bash[22585]: audit 2026-03-09T14:32:25.103846+0000 mon.a (mon.0) 580 : audit [INF] from='client.? 192.168.123.107:0/1635796782' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T14:32:26.298 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:26 vm07 bash[22585]: audit 2026-03-09T14:32:25.108631+0000 mon.a (mon.0) 581 : audit [INF] from='client.? 192.168.123.107:0/1635796782' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T14:32:26.298 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:26 vm07 bash[22585]: audit 2026-03-09T14:32:25.662986+0000 mon.b (mon.2) 47 : audit [INF] from='client.? 192.168.123.111:0/1723543585' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T14:32:26.298 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:26 vm07 bash[22585]: audit 2026-03-09T14:32:25.664032+0000 mon.a (mon.0) 582 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T14:32:26.298 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:26 vm07 bash[22585]: audit 2026-03-09T14:32:25.667782+0000 mon.a (mon.0) 583 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T14:32:26.409 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:26 vm11 bash[17885]: cluster 2026-03-09T14:32:25.003718+0000 mgr.y (mgr.24310) 9 : cluster [DBG] pgmap v4: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:26.409 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:26 vm11 bash[17885]: audit 2026-03-09T14:32:25.103846+0000 mon.a (mon.0) 580 : audit [INF] from='client.? 192.168.123.107:0/1635796782' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T14:32:26.409 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:26 vm11 bash[17885]: audit 2026-03-09T14:32:25.108631+0000 mon.a (mon.0) 581 : audit [INF] from='client.? 192.168.123.107:0/1635796782' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T14:32:26.409 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:26 vm11 bash[17885]: audit 2026-03-09T14:32:25.662986+0000 mon.b (mon.2) 47 : audit [INF] from='client.? 
192.168.123.111:0/1723543585' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T14:32:26.409 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:26 vm11 bash[17885]: audit 2026-03-09T14:32:25.664032+0000 mon.a (mon.0) 582 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T14:32:26.410 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:26 vm11 bash[17885]: audit 2026-03-09T14:32:25.667782+0000 mon.a (mon.0) 583 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T14:32:27.127 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:27 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:27.127 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:32:27 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:27.127 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:32:27 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:27.127 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:32:27 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:27.127 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:32:27 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:27.128 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:27 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:27.128 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:27 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:27.128 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:27 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:27.415 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:32:27 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:27.416 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:32:27 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:27.416 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:32:27 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:27.416 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:27 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
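The systemd notices above (repeated below for the other daemons) come from the cephadm-generated unit template ceph-<fsid>@.service, which sets KillMode=none; they are deprecation warnings only and do not affect the test. Purely as an illustration of what the message asks for, and not something this job performs, a drop-in override could switch the template to one of the suggested modes:

    # Illustrative only: override KillMode for the cephadm unit template on one host
    unit=ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service
    sudo mkdir -p /etc/systemd/system/${unit}.d
    sudo tee /etc/systemd/system/${unit}.d/killmode.conf >/dev/null <<'EOF'
    [Service]
    KillMode=mixed
    EOF
    sudo systemctl daemon-reload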
2026-03-09T14:32:27.416 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:27 vm07 bash[22585]: audit 2026-03-09T14:32:26.280535+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:27.416 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:27 vm07 bash[22585]: audit 2026-03-09T14:32:26.417811+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:27.416 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:27 vm07 bash[22585]: audit 2026-03-09T14:32:26.551507+0000 mon.a (mon.0) 586 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:27.416 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:27 vm07 bash[22585]: audit 2026-03-09T14:32:26.553846+0000 mon.b (mon.2) 48 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:27.416 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:27 vm07 bash[22585]: audit 2026-03-09T14:32:26.554353+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:27.416 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:27 vm07 bash[22585]: cephadm 2026-03-09T14:32:26.554923+0000 mgr.y (mgr.24310) 10 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-09T14:32:27.416 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:27 vm07 bash[22585]: cephadm 2026-03-09T14:32:26.616749+0000 mgr.y (mgr.24310) 11 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:32:27.416 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:27 vm07 bash[22585]: audit 2026-03-09T14:32:26.683951+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:27.416 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:27 vm07 bash[22585]: audit 2026-03-09T14:32:26.692930+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:27.416 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:27 vm07 bash[22585]: audit 2026-03-09T14:32:26.693917+0000 mon.b (mon.2) 49 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:27.416 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:27 vm07 bash[22585]: audit 2026-03-09T14:32:26.694424+0000 mon.a (mon.0) 590 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:27.416 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:27 vm07 bash[22585]: audit 2026-03-09T14:32:26.694892+0000 mon.b (mon.2) 50 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:27.416 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:27 vm07 bash[22585]: audit 2026-03-09T14:32:26.695290+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:27.416 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:27 vm07 bash[22585]: audit 2026-03-09T14:32:26.695742+0000 mon.b (mon.2) 51 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 
2026-03-09T14:32:27.416 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:27 vm07 bash[22585]: audit 2026-03-09T14:32:26.696118+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:27.416 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:27 vm07 bash[22585]: audit 2026-03-09T14:32:26.696550+0000 mon.b (mon.2) 52 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:27.416 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:27 vm07 bash[22585]: audit 2026-03-09T14:32:26.696970+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:27.416 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:27 vm07 bash[22585]: audit 2026-03-09T14:32:26.817188+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:27.416 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:27 vm07 bash[22585]: audit 2026-03-09T14:32:26.822760+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:27.416 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:32:27 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:27.417 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:27 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
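The config rm audit entries interleaved above are the newly active mgr.y clearing osd_memory_target overrides, first for the host (osd/host:vm07) and then for osd.4 through osd.7. The CLI equivalent of those dispatched mon commands would be the following (illustrative form; the mgr issues them internally rather than through the shell):

    # Equivalent CLI for the dispatched "config rm" mon commands (illustrative)
    ceph config rm osd/host:vm07 osd_memory_target
    for id in 4 5 6 7; do
        ceph config rm "osd.$id" osd_memory_target
    done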
2026-03-09T14:32:27.417 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:27 vm07 bash[17480]: audit 2026-03-09T14:32:26.280535+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:27.417 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:27 vm07 bash[17480]: audit 2026-03-09T14:32:26.417811+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:27.417 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:27 vm07 bash[17480]: audit 2026-03-09T14:32:26.551507+0000 mon.a (mon.0) 586 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:27.417 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:27 vm07 bash[17480]: audit 2026-03-09T14:32:26.553846+0000 mon.b (mon.2) 48 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:27.417 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:27 vm07 bash[17480]: audit 2026-03-09T14:32:26.554353+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:27.417 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:27 vm07 bash[17480]: cephadm 2026-03-09T14:32:26.554923+0000 mgr.y (mgr.24310) 10 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-09T14:32:27.417 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:27 vm07 bash[17480]: cephadm 2026-03-09T14:32:26.616749+0000 mgr.y (mgr.24310) 11 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:32:27.417 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:27 vm07 bash[17480]: audit 2026-03-09T14:32:26.683951+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:27.417 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:27 vm07 bash[17480]: audit 2026-03-09T14:32:26.692930+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:27.417 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:27 vm07 bash[17480]: audit 2026-03-09T14:32:26.693917+0000 mon.b (mon.2) 49 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:27.417 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:27 vm07 bash[17480]: audit 2026-03-09T14:32:26.694424+0000 mon.a (mon.0) 590 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:27.417 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:27 vm07 bash[17480]: audit 2026-03-09T14:32:26.694892+0000 mon.b (mon.2) 50 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:27.417 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:27 vm07 bash[17480]: audit 2026-03-09T14:32:26.695290+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:27.417 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:27 vm07 bash[17480]: audit 2026-03-09T14:32:26.695742+0000 mon.b (mon.2) 51 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 
2026-03-09T14:32:27.417 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:27 vm07 bash[17480]: audit 2026-03-09T14:32:26.696118+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:27.417 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:27 vm07 bash[17480]: audit 2026-03-09T14:32:26.696550+0000 mon.b (mon.2) 52 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:27.417 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:27 vm07 bash[17480]: audit 2026-03-09T14:32:26.696970+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:27.417 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:27 vm07 bash[17480]: audit 2026-03-09T14:32:26.817188+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:27.417 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:27 vm07 bash[17480]: audit 2026-03-09T14:32:26.822760+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:27.417 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:27 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:27.417 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:27 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:27.417 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:27 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:27.417 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:27 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:27.417 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:27 vm07 systemd[1]: Started Ceph node-exporter.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 
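node-exporter.a has just started on vm07; as the next journal line shows, its container image is not yet present locally, so the first start has to pull quay.io/prometheus/node-exporter:v1.3.1. Pre-pulling the image would avoid that delay (illustrative, not part of this job; use podman pull instead if podman is the container runtime on the host):

    sudo docker pull quay.io/prometheus/node-exporter:v1.3.1   # or: sudo podman pull ...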
2026-03-09T14:32:27.417 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:27 vm07 bash[37537]: Unable to find image 'quay.io/prometheus/node-exporter:v1.3.1' locally 2026-03-09T14:32:27.584 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:27 vm11 bash[17885]: audit 2026-03-09T14:32:26.280535+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:27.584 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:27 vm11 bash[17885]: audit 2026-03-09T14:32:26.417811+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:27.584 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:27 vm11 bash[17885]: audit 2026-03-09T14:32:26.551507+0000 mon.a (mon.0) 586 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:27.584 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:27 vm11 bash[17885]: audit 2026-03-09T14:32:26.553846+0000 mon.b (mon.2) 48 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:27.584 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:27 vm11 bash[17885]: audit 2026-03-09T14:32:26.554353+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:27.584 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:27 vm11 bash[17885]: cephadm 2026-03-09T14:32:26.554923+0000 mgr.y (mgr.24310) 10 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-09T14:32:27.584 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:27 vm11 bash[17885]: cephadm 2026-03-09T14:32:26.616749+0000 mgr.y (mgr.24310) 11 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:32:27.584 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:27 vm11 bash[17885]: audit 2026-03-09T14:32:26.683951+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:27.584 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:27 vm11 bash[17885]: audit 2026-03-09T14:32:26.692930+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:27.584 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:27 vm11 bash[17885]: audit 2026-03-09T14:32:26.693917+0000 mon.b (mon.2) 49 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:27.584 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:27 vm11 bash[17885]: audit 2026-03-09T14:32:26.694424+0000 mon.a (mon.0) 590 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:27.584 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:27 vm11 bash[17885]: audit 2026-03-09T14:32:26.694892+0000 mon.b (mon.2) 50 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:27.584 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:27 vm11 bash[17885]: audit 2026-03-09T14:32:26.695290+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:27.584 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:27 vm11 bash[17885]: audit 2026-03-09T14:32:26.695742+0000 mon.b 
(mon.2) 51 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:27.584 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:27 vm11 bash[17885]: audit 2026-03-09T14:32:26.696118+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:27.584 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:27 vm11 bash[17885]: audit 2026-03-09T14:32:26.696550+0000 mon.b (mon.2) 52 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:27.584 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:27 vm11 bash[17885]: audit 2026-03-09T14:32:26.696970+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:32:27.584 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:27 vm11 bash[17885]: audit 2026-03-09T14:32:26.817188+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:27.584 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:27 vm11 bash[17885]: audit 2026-03-09T14:32:26.822760+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:27.867 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:32:27 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:27.867 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:32:27 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:27.867 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:32:27 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:27.867 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:32:27 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:27.868 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:27 vm11 systemd[1]: Started Ceph node-exporter.b for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 
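Meanwhile the harness is still in its "waiting for mgr available" loop: it re-issues the ceph mgr dump shown earlier until the mgr map reports an available active mgr, which the JSON dump a few lines below ("available":true) finally does. A hedged sketch of that check (the real loop lives in teuthology's Python code):

    # Illustrative polling loop; the 'ceph mgr dump' invocation matches the one in this log
    fsid=f59f9828-1bc3-11f1-bfd8-7b3d0c866040
    until sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid "$fsid" -- \
            ceph mgr dump --format=json | jq -e '.available == true' >/dev/null; do
        sleep 5
    done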
2026-03-09T14:32:27.868 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:27 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:27.868 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:27 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:27.868 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:27 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:27.868 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:27 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:27.868 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:32:27 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:27.868 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:32:27 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:27.868 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:32:27 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:27.868 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:32:27 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:28.077 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/mon.c/config 2026-03-09T14:32:28.260 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:28 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:28.260 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:27 vm11 bash[32718]: Unable to find image 'quay.io/prometheus/node-exporter:v1.3.1' locally 2026-03-09T14:32:28.456 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T14:32:28.511 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":20,"active_gid":24310,"active_name":"y","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6800","nonce":1071847988},{"type":"v1","addr":"192.168.123.107:6801","nonce":1071847988}]},"active_addr":"192.168.123.107:6801/1071847988","active_change":"2026-03-09T14:32:22.014471+0000","active_mgr_features":4540138303579357183,"available":true,"standbys":[{"gid":24323,"name":"x","mgr_features":4540138303579357183,"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate 
with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"7","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 or 7 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2400","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"7","min":"0","max":"7","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 or 7 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","upmap"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt optimization","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph 
containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.23.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/ceph-grafana:8.3.5","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"docker.io/library/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"docker.io/arcts/keepalived","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.3.1","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.33.4","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"docker.io/maxwo/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container 
image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"docker.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per 
host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. 
Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_val
ue":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_P
OLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the day","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health 
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this 
long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local pool","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool 
for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator backend","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"noautoscale":{"name":"noautoscale","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"global autoscale flag","long_desc":"Option to turn on/off the autoscaler for all 
pools","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"serve
r_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"drive_group_interval":{"name":"drive_group_interval","type":"float","level":"advanced","flags":0,"default_value":"300.0","min":"","max":"","enum_allowed":[],"desc":"interval in seconds between re-application of applied drive_groups","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"",
"enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False
","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name
":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default
_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}]}],"modules":["cephadm","dashboard","iostat","nfs","prometheus","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format 
HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"7","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 or 7 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2400","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"7","min":"0","max":"7","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 or 7 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","upmap"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. 
Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.23.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/ceph-grafana:8.3.5","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"docker.io/library/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"docker.io/arcts/keepalived","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.3.1","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.33.4","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"docker.io/maxwo/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with 
`--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"docker.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are 
removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. 
Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_val
ue":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_P
OLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the day","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health 
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this 
long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local pool","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool 
for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator backend","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"noautoscale":{"name":"noautoscale","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"global autoscale flag","long_desc":"Option to turn on/off the autoscaler for all 
pools","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"serve
r_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"drive_group_interval":{"name":"drive_group_interval","type":"float","level":"advanced","flags":0,"default_value":"300.0","min":"","max":"","enum_allowed":[],"desc":"interval in seconds between re-application of applied drive_groups","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"",
"enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False
","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name
":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default
_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.107:8443/","prometheus":"http://192.168.123.107:9283/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"last_failure_osd_epoch":50,"active_clients":[{"addrvec":[{"type":"v2","addr":"192.168.123.107:0","nonce":1561618492}]},{"addrvec":[{"type":"v2","addr":"192.168.123.107:0","nonce":4000554118}]},{"addrvec":[{"type":"v2","addr":"192.168.123.107:0","nonce":13006971}]},{"addrvec":[{"type":"v2","addr":"192.168.123.107:0","nonce":1282393936}]}]}} 2026-03-09T14:32:28.513 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 2026-03-09T14:32:28.513 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-09T14:32:28.513 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph osd dump --format=json 2026-03-09T14:32:28.666 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:28 vm07 bash[17480]: cephadm 2026-03-09T14:32:26.697463+0000 mgr.y (mgr.24310) 12 : cephadm [INF] Adjusting osd_memory_target on vm11 to 113.9M 2026-03-09T14:32:28.666 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:28 vm07 bash[17480]: cephadm 2026-03-09T14:32:26.698239+0000 mgr.y (mgr.24310) 13 : cephadm [WRN] Unable to set osd_memory_target on vm11 to 119479705: error parsing value: Value '119479705' is below minimum 939524096 2026-03-09T14:32:28.666 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:28 vm07 bash[17480]: cephadm 2026-03-09T14:32:26.698302+0000 mgr.y (mgr.24310) 14 : cephadm [INF] Updating vm11:/etc/ceph/ceph.conf 2026-03-09T14:32:28.666 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:28 vm07 bash[17480]: cephadm 2026-03-09T14:32:26.753598+0000 mgr.y (mgr.24310) 15 : cephadm [INF] Updating vm11:/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:32:28.666 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:28 vm07 bash[17480]: cephadm 2026-03-09T14:32:26.825005+0000 mgr.y (mgr.24310) 16 : cephadm [INF] Deploying daemon node-exporter.a on vm07 2026-03-09T14:32:28.666 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:28 vm07 bash[17480]: cluster 2026-03-09T14:32:27.004031+0000 mgr.y (mgr.24310) 17 : cluster [DBG] pgmap v5: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:28.666 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:28 vm07 bash[17480]: audit 2026-03-09T14:32:27.346508+0000 mon.a (mon.0) 596 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:28.666 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:28 vm07 bash[17480]: cephadm 2026-03-09T14:32:27.350281+0000 mgr.y (mgr.24310) 18 : cephadm [INF] Deploying daemon node-exporter.b on vm11 2026-03-09T14:32:28.666 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:28 vm07 bash[17480]: audit 2026-03-09T14:32:27.889606+0000 mon.a (mon.0) 597 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:28.666 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:28 vm07 bash[17480]: audit 2026-03-09T14:32:28.080753+0000 mon.a (mon.0) 598 : audit [INF] 
from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:28.666 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:28 vm07 bash[22585]: cephadm 2026-03-09T14:32:26.697463+0000 mgr.y (mgr.24310) 12 : cephadm [INF] Adjusting osd_memory_target on vm11 to 113.9M 2026-03-09T14:32:28.666 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:28 vm07 bash[22585]: cephadm 2026-03-09T14:32:26.698239+0000 mgr.y (mgr.24310) 13 : cephadm [WRN] Unable to set osd_memory_target on vm11 to 119479705: error parsing value: Value '119479705' is below minimum 939524096 2026-03-09T14:32:28.666 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:28 vm07 bash[22585]: cephadm 2026-03-09T14:32:26.698302+0000 mgr.y (mgr.24310) 14 : cephadm [INF] Updating vm11:/etc/ceph/ceph.conf 2026-03-09T14:32:28.666 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:28 vm07 bash[22585]: cephadm 2026-03-09T14:32:26.753598+0000 mgr.y (mgr.24310) 15 : cephadm [INF] Updating vm11:/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:32:28.666 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:28 vm07 bash[22585]: cephadm 2026-03-09T14:32:26.825005+0000 mgr.y (mgr.24310) 16 : cephadm [INF] Deploying daemon node-exporter.a on vm07 2026-03-09T14:32:28.666 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:28 vm07 bash[22585]: cluster 2026-03-09T14:32:27.004031+0000 mgr.y (mgr.24310) 17 : cluster [DBG] pgmap v5: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:28.666 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:28 vm07 bash[22585]: audit 2026-03-09T14:32:27.346508+0000 mon.a (mon.0) 596 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:28.666 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:28 vm07 bash[22585]: cephadm 2026-03-09T14:32:27.350281+0000 mgr.y (mgr.24310) 18 : cephadm [INF] Deploying daemon node-exporter.b on vm11 2026-03-09T14:32:28.666 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:28 vm07 bash[22585]: audit 2026-03-09T14:32:27.889606+0000 mon.a (mon.0) 597 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:28.666 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:28 vm07 bash[22585]: audit 2026-03-09T14:32:28.080753+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:28.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:28 vm11 bash[17885]: cephadm 2026-03-09T14:32:26.697463+0000 mgr.y (mgr.24310) 12 : cephadm [INF] Adjusting osd_memory_target on vm11 to 113.9M 2026-03-09T14:32:28.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:28 vm11 bash[17885]: cephadm 2026-03-09T14:32:26.698239+0000 mgr.y (mgr.24310) 13 : cephadm [WRN] Unable to set osd_memory_target on vm11 to 119479705: error parsing value: Value '119479705' is below minimum 939524096 2026-03-09T14:32:28.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:28 vm11 bash[17885]: cephadm 2026-03-09T14:32:26.698302+0000 mgr.y (mgr.24310) 14 : cephadm [INF] Updating vm11:/etc/ceph/ceph.conf 2026-03-09T14:32:28.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:28 vm11 bash[17885]: cephadm 2026-03-09T14:32:26.753598+0000 mgr.y (mgr.24310) 15 : cephadm [INF] Updating vm11:/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:32:28.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:28 vm11 bash[17885]: cephadm 2026-03-09T14:32:26.825005+0000 mgr.y (mgr.24310) 16 : cephadm [INF] Deploying daemon node-exporter.a on vm07 2026-03-09T14:32:28.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:28 vm11 bash[17885]: cluster 
2026-03-09T14:32:27.004031+0000 mgr.y (mgr.24310) 17 : cluster [DBG] pgmap v5: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:28.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:28 vm11 bash[17885]: audit 2026-03-09T14:32:27.346508+0000 mon.a (mon.0) 596 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:28.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:28 vm11 bash[17885]: cephadm 2026-03-09T14:32:27.350281+0000 mgr.y (mgr.24310) 18 : cephadm [INF] Deploying daemon node-exporter.b on vm11 2026-03-09T14:32:28.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:28 vm11 bash[17885]: audit 2026-03-09T14:32:27.889606+0000 mon.a (mon.0) 597 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:28.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:28 vm11 bash[17885]: audit 2026-03-09T14:32:28.080753+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:28.915 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:28 vm07 bash[37537]: v1.3.1: Pulling from prometheus/node-exporter 2026-03-09T14:32:29.357 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:29 vm07 bash[37537]: aa2a8d90b84c: Pulling fs layer 2026-03-09T14:32:29.357 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:29 vm07 bash[37537]: b45d31ee2d7f: Pulling fs layer 2026-03-09T14:32:29.357 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:29 vm07 bash[37537]: b5db1e299295: Pulling fs layer 2026-03-09T14:32:29.509 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:29 vm11 bash[32718]: v1.3.1: Pulling from prometheus/node-exporter 2026-03-09T14:32:29.509 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:29 vm11 bash[17885]: cephadm 2026-03-09T14:32:27.900850+0000 mgr.y (mgr.24310) 19 : cephadm [INF] Deploying daemon prometheus.a on vm11 2026-03-09T14:32:29.509 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:29 vm11 bash[17885]: audit 2026-03-09T14:32:28.451979+0000 mon.c (mon.1) 18 : audit [DBG] from='client.? 192.168.123.107:0/4201047608' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T14:32:29.665 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:29 vm07 bash[22585]: cephadm 2026-03-09T14:32:27.900850+0000 mgr.y (mgr.24310) 19 : cephadm [INF] Deploying daemon prometheus.a on vm11 2026-03-09T14:32:29.665 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:29 vm07 bash[22585]: audit 2026-03-09T14:32:28.451979+0000 mon.c (mon.1) 18 : audit [DBG] from='client.? 192.168.123.107:0/4201047608' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T14:32:29.666 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:29 vm07 bash[17480]: cephadm 2026-03-09T14:32:27.900850+0000 mgr.y (mgr.24310) 19 : cephadm [INF] Deploying daemon prometheus.a on vm11 2026-03-09T14:32:29.666 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:29 vm07 bash[17480]: audit 2026-03-09T14:32:28.451979+0000 mon.c (mon.1) 18 : audit [DBG] from='client.? 
192.168.123.107:0/4201047608' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T14:32:30.009 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:29 vm11 bash[32718]: aa2a8d90b84c: Pulling fs layer 2026-03-09T14:32:30.009 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:29 vm11 bash[32718]: b45d31ee2d7f: Pulling fs layer 2026-03-09T14:32:30.009 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:29 vm11 bash[32718]: b5db1e299295: Pulling fs layer 2026-03-09T14:32:30.300 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: b45d31ee2d7f: Verifying Checksum 2026-03-09T14:32:30.300 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: b45d31ee2d7f: Download complete 2026-03-09T14:32:30.300 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: aa2a8d90b84c: Verifying Checksum 2026-03-09T14:32:30.300 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: aa2a8d90b84c: Download complete 2026-03-09T14:32:30.300 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: aa2a8d90b84c: Pull complete 2026-03-09T14:32:30.300 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: b5db1e299295: Verifying Checksum 2026-03-09T14:32:30.300 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: b5db1e299295: Download complete 2026-03-09T14:32:30.300 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: b45d31ee2d7f: Pull complete 2026-03-09T14:32:30.300 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: b5db1e299295: Pull complete 2026-03-09T14:32:30.373 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: b45d31ee2d7f: Verifying Checksum 2026-03-09T14:32:30.374 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: b45d31ee2d7f: Download complete 2026-03-09T14:32:30.374 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: aa2a8d90b84c: Verifying Checksum 2026-03-09T14:32:30.374 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: aa2a8d90b84c: Download complete 2026-03-09T14:32:30.374 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: aa2a8d90b84c: Pull complete 2026-03-09T14:32:30.374 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: b5db1e299295: Verifying Checksum 2026-03-09T14:32:30.374 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: b5db1e299295: Download complete 2026-03-09T14:32:30.665 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:30 vm07 bash[22585]: cluster 2026-03-09T14:32:29.004376+0000 mgr.y (mgr.24310) 20 : cluster [DBG] pgmap v6: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:30.665 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[17480]: cluster 2026-03-09T14:32:29.004376+0000 mgr.y (mgr.24310) 20 : cluster [DBG] pgmap v6: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: Digest: sha256:f2269e73124dd0f60a7d19a2ce1264d33d08a985aed0ee6b0b89d0be470592cd 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: Status: 
Downloaded newer image for quay.io/prometheus/node-exporter:v1.3.1 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:182 level=info msg="Starting node_exporter" version="(version=1.3.1, branch=HEAD, revision=a2321e7b940ddcff26873612bccdf7cd4c42b6b6)" 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:183 level=info msg="Build context" build_context="(go=go1.17.3, user=root@243aafa5525c, date=20211205-11:09:49)" 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+)($|/) 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:108 level=info msg="Enabled collectors" 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=arp 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=bcache 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=bonding 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=btrfs 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=conntrack 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=cpu 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=cpufreq 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=diskstats 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=dmi 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=edac 2026-03-09T14:32:30.666 
INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=entropy 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=fibrechannel 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=filefd 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=filesystem 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=hwmon 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=infiniband 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=ipvs 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=loadavg 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=mdadm 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=meminfo 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=netclass 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=netdev 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=netstat 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=nfs 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=nfsd 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=nvme 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=os 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=powersupplyclass 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: 
ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=pressure 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=rapl 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=schedstat 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=sockstat 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=softnet 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=stat 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=tapestats 2026-03-09T14:32:30.666 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=textfile 2026-03-09T14:32:30.667 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=thermal_zone 2026-03-09T14:32:30.667 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=time 2026-03-09T14:32:30.667 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=udp_queues 2026-03-09T14:32:30.667 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=uname 2026-03-09T14:32:30.667 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=vmstat 2026-03-09T14:32:30.667 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=xfs 2026-03-09T14:32:30.667 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:115 level=info collector=zfs 2026-03-09T14:32:30.667 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.431Z caller=node_exporter.go:199 level=info msg="Listening on" address=:9100 2026-03-09T14:32:30.667 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:30 vm07 bash[37537]: ts=2026-03-09T14:32:30.432Z caller=tls_config.go:195 level=info msg="TLS is disabled." 
http2=false 2026-03-09T14:32:30.759 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: b45d31ee2d7f: Pull complete 2026-03-09T14:32:30.759 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: b5db1e299295: Pull complete 2026-03-09T14:32:30.759 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: Digest: sha256:f2269e73124dd0f60a7d19a2ce1264d33d08a985aed0ee6b0b89d0be470592cd 2026-03-09T14:32:30.759 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: Status: Downloaded newer image for quay.io/prometheus/node-exporter:v1.3.1 2026-03-09T14:32:30.759 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.602Z caller=node_exporter.go:182 level=info msg="Starting node_exporter" version="(version=1.3.1, branch=HEAD, revision=a2321e7b940ddcff26873612bccdf7cd4c42b6b6)" 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.602Z caller=node_exporter.go:183 level=info msg="Build context" build_context="(go=go1.17.3, user=root@243aafa5525c, date=20211205-11:09:49)" 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.602Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+)($|/) 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.602Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.603Z caller=node_exporter.go:108 level=info msg="Enabled collectors" 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.603Z caller=node_exporter.go:115 level=info collector=arp 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.603Z caller=node_exporter.go:115 level=info collector=bcache 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.603Z caller=node_exporter.go:115 level=info collector=bonding 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.603Z caller=node_exporter.go:115 level=info collector=btrfs 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.603Z caller=node_exporter.go:115 level=info collector=conntrack 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.603Z caller=node_exporter.go:115 level=info collector=cpu 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.603Z caller=node_exporter.go:115 level=info collector=cpufreq 2026-03-09T14:32:30.760 
INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.603Z caller=node_exporter.go:115 level=info collector=diskstats 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.603Z caller=node_exporter.go:115 level=info collector=dmi 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.603Z caller=node_exporter.go:115 level=info collector=edac 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.603Z caller=node_exporter.go:115 level=info collector=entropy 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.603Z caller=node_exporter.go:115 level=info collector=fibrechannel 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.603Z caller=node_exporter.go:115 level=info collector=filefd 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.603Z caller=node_exporter.go:115 level=info collector=filesystem 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.603Z caller=node_exporter.go:115 level=info collector=hwmon 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.603Z caller=node_exporter.go:115 level=info collector=infiniband 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.603Z caller=node_exporter.go:115 level=info collector=ipvs 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.603Z caller=node_exporter.go:115 level=info collector=loadavg 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.603Z caller=node_exporter.go:115 level=info collector=mdadm 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.603Z caller=node_exporter.go:115 level=info collector=meminfo 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.603Z caller=node_exporter.go:115 level=info collector=netclass 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.603Z caller=node_exporter.go:115 level=info collector=netdev 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.603Z caller=node_exporter.go:115 level=info collector=netstat 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.603Z caller=node_exporter.go:115 level=info collector=nfs 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.603Z caller=node_exporter.go:115 level=info collector=nfsd 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: 
ts=2026-03-09T14:32:30.603Z caller=node_exporter.go:115 level=info collector=nvme 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.603Z caller=node_exporter.go:115 level=info collector=os 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.603Z caller=node_exporter.go:115 level=info collector=powersupplyclass 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.603Z caller=node_exporter.go:115 level=info collector=pressure 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.603Z caller=node_exporter.go:115 level=info collector=rapl 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.603Z caller=node_exporter.go:115 level=info collector=schedstat 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.603Z caller=node_exporter.go:115 level=info collector=sockstat 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.604Z caller=node_exporter.go:115 level=info collector=softnet 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.604Z caller=node_exporter.go:115 level=info collector=stat 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.604Z caller=node_exporter.go:115 level=info collector=tapestats 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.604Z caller=node_exporter.go:115 level=info collector=textfile 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.604Z caller=node_exporter.go:115 level=info collector=thermal_zone 2026-03-09T14:32:30.760 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.604Z caller=node_exporter.go:115 level=info collector=time 2026-03-09T14:32:30.761 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.604Z caller=node_exporter.go:115 level=info collector=udp_queues 2026-03-09T14:32:30.761 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.604Z caller=node_exporter.go:115 level=info collector=uname 2026-03-09T14:32:30.761 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.604Z caller=node_exporter.go:115 level=info collector=vmstat 2026-03-09T14:32:30.761 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.604Z caller=node_exporter.go:115 level=info collector=xfs 2026-03-09T14:32:30.761 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.604Z caller=node_exporter.go:115 level=info collector=zfs 2026-03-09T14:32:30.761 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.604Z caller=node_exporter.go:199 level=info msg="Listening on" address=:9100 
2026-03-09T14:32:30.761 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[32718]: ts=2026-03-09T14:32:30.604Z caller=tls_config.go:195 level=info msg="TLS is disabled." http2=false 2026-03-09T14:32:30.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:30 vm11 bash[17885]: cluster 2026-03-09T14:32:29.004376+0000 mgr.y (mgr.24310) 20 : cluster [DBG] pgmap v6: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:32.127 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/mon.c/config 2026-03-09T14:32:32.476 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T14:32:32.477 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":50,"fsid":"f59f9828-1bc3-11f1-bfd8-7b3d0c866040","created":"2026-03-09T14:29:19.844551+0000","modified":"2026-03-09T14:32:22.013680+0000","last_up_change":"2026-03-09T14:32:09.277693+0000","last_in_change":"2026-03-09T14:31:55.753851+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"quincy","pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T14:30:54.519768+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"21","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}}}],"osds":[{"osd":0,"uuid":"01f1c7a2-0d56-449a-98b5-2d0134c34758","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":47,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6802","nonce":3608472040},{"type":"v1","addr":"192.168.123.107:6803","nonce":3608472040}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6804","nonce":3608472040},{"type":"v1","addr":"192.168.123.107:6805","nonce":3608472040}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2"
,"addr":"192.168.123.107:6808","nonce":3608472040},{"type":"v1","addr":"192.168.123.107:6809","nonce":3608472040}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6806","nonce":3608472040},{"type":"v1","addr":"192.168.123.107:6807","nonce":3608472040}]},"public_addr":"192.168.123.107:6803/3608472040","cluster_addr":"192.168.123.107:6805/3608472040","heartbeat_back_addr":"192.168.123.107:6809/3608472040","heartbeat_front_addr":"192.168.123.107:6807/3608472040","state":["exists","up"]},{"osd":1,"uuid":"c5bcdd68-0c8f-46dc-8a25-561605efa0ff","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":31,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6810","nonce":2809750614},{"type":"v1","addr":"192.168.123.107:6811","nonce":2809750614}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6812","nonce":2809750614},{"type":"v1","addr":"192.168.123.107:6813","nonce":2809750614}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6816","nonce":2809750614},{"type":"v1","addr":"192.168.123.107:6817","nonce":2809750614}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6814","nonce":2809750614},{"type":"v1","addr":"192.168.123.107:6815","nonce":2809750614}]},"public_addr":"192.168.123.107:6811/2809750614","cluster_addr":"192.168.123.107:6813/2809750614","heartbeat_back_addr":"192.168.123.107:6817/2809750614","heartbeat_front_addr":"192.168.123.107:6815/2809750614","state":["exists","up"]},{"osd":2,"uuid":"6878f209-d828-467d-8a66-6cca096732a5","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6818","nonce":2936867491},{"type":"v1","addr":"192.168.123.107:6819","nonce":2936867491}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6820","nonce":2936867491},{"type":"v1","addr":"192.168.123.107:6821","nonce":2936867491}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6824","nonce":2936867491},{"type":"v1","addr":"192.168.123.107:6825","nonce":2936867491}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6822","nonce":2936867491},{"type":"v1","addr":"192.168.123.107:6823","nonce":2936867491}]},"public_addr":"192.168.123.107:6819/2936867491","cluster_addr":"192.168.123.107:6821/2936867491","heartbeat_back_addr":"192.168.123.107:6825/2936867491","heartbeat_front_addr":"192.168.123.107:6823/2936867491","state":["exists","up"]},{"osd":3,"uuid":"afc54d82-66a7-42e1-83c1-0970428ef794","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":25,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6826","nonce":2142580280},{"type":"v1","addr":"192.168.123.107:6827","nonce":2142580280}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6828","nonce":2142580280},{"type":"v1","addr":"192.168.123.107:6829","nonce":2142580280}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6832","nonce":2142580280},{"type":"v1","addr":"192.168.123.107:6833","nonce":2142580280}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6830","nonce":2142580280},{"type":"v1","addr":"192.168.123.107:6831","nonce":2142580280}]},"public_addr":"192.168.123.107:6827/2142580280","cluster_addr":"192.168.123.107:6829/
2142580280","heartbeat_back_addr":"192.168.123.107:6833/2142580280","heartbeat_front_addr":"192.168.123.107:6831/2142580280","state":["exists","up"]},{"osd":4,"uuid":"8e6cc346-4281-49a1-9886-18c25e9addfc","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":30,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6800","nonce":2733246535},{"type":"v1","addr":"192.168.123.111:6801","nonce":2733246535}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6802","nonce":2733246535},{"type":"v1","addr":"192.168.123.111:6803","nonce":2733246535}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6806","nonce":2733246535},{"type":"v1","addr":"192.168.123.111:6807","nonce":2733246535}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6804","nonce":2733246535},{"type":"v1","addr":"192.168.123.111:6805","nonce":2733246535}]},"public_addr":"192.168.123.111:6801/2733246535","cluster_addr":"192.168.123.111:6803/2733246535","heartbeat_back_addr":"192.168.123.111:6807/2733246535","heartbeat_front_addr":"192.168.123.111:6805/2733246535","state":["exists","up"]},{"osd":5,"uuid":"104be397-ca1c-4a2d-ae2d-97efa37d095a","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":36,"up_thru":37,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6808","nonce":122506048},{"type":"v1","addr":"192.168.123.111:6809","nonce":122506048}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6810","nonce":122506048},{"type":"v1","addr":"192.168.123.111:6811","nonce":122506048}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6814","nonce":122506048},{"type":"v1","addr":"192.168.123.111:6815","nonce":122506048}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6812","nonce":122506048},{"type":"v1","addr":"192.168.123.111:6813","nonce":122506048}]},"public_addr":"192.168.123.111:6809/122506048","cluster_addr":"192.168.123.111:6811/122506048","heartbeat_back_addr":"192.168.123.111:6815/122506048","heartbeat_front_addr":"192.168.123.111:6813/122506048","state":["exists","up"]},{"osd":6,"uuid":"77a63107-dca7-4e61-85ab-633ea82bcb7d","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":41,"up_thru":42,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6816","nonce":615402579},{"type":"v1","addr":"192.168.123.111:6817","nonce":615402579}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6818","nonce":615402579},{"type":"v1","addr":"192.168.123.111:6819","nonce":615402579}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6822","nonce":615402579},{"type":"v1","addr":"192.168.123.111:6823","nonce":615402579}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6820","nonce":615402579},{"type":"v1","addr":"192.168.123.111:6821","nonce":615402579}]},"public_addr":"192.168.123.111:6817/615402579","cluster_addr":"192.168.123.111:6819/615402579","heartbeat_back_addr":"192.168.123.111:6823/615402579","heartbeat_front_addr":"192.168.123.111:6821/615402579","state":["exists","up"]},{"osd":7,"uuid":"abdf6bc5-5826-4388-bb2b-2d627c14c61b","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":47,"up_thru":48,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":
"192.168.123.111:6824","nonce":2351382271},{"type":"v1","addr":"192.168.123.111:6825","nonce":2351382271}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6826","nonce":2351382271},{"type":"v1","addr":"192.168.123.111:6827","nonce":2351382271}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6830","nonce":2351382271},{"type":"v1","addr":"192.168.123.111:6831","nonce":2351382271}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6828","nonce":2351382271},{"type":"v1","addr":"192.168.123.111:6829","nonce":2351382271}]},"public_addr":"192.168.123.111:6825/2351382271","cluster_addr":"192.168.123.111:6827/2351382271","heartbeat_back_addr":"192.168.123.111:6831/2351382271","heartbeat_front_addr":"192.168.123.111:6829/2351382271","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:30:21.765859+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:30:37.134085+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:30:52.232648+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:31:08.073169+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:31:22.394871+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:31:37.533651+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:31:51.207510+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:32:07.306062+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.107:0/142611623":"2026-03-10T14:32:22.013656+0000","192.168.123.107:6801/3123223907":"2026-03-10T14:32:22.013656+0000","192.168.123.107:6800/3123223907":"2026-03-10T14:32:22.013656+0000","192.168.123.107:0/1711405865":"2026-03-10T14:32:22.013656+0000","192.168.123.107:0/2850874332":"2026-03-10T14:29:42.227300+0000","192.168.123.107:0/1354642186":"2026-03-10T14:32:22.013656+0000","192.168.123.107:0/2030379457":"2026-03-10T14:29:42.227300+0000","192.168.123.107:6801/735153467":"2026-03-10T14:29:42.227300+0000","192.168.123.107:0/2859869932":"2026-03-10T14:32:22.013656+0000","192.168.123.107:0/1327540493":"2026-03-10T14:29:33.188169+0000","192.168.123.107:0/1561120863":"2026-03-10T14:29:33.188169+0000","192.168.123.107:6800/735153467":"2026-03-10T14:29:42.227300+0000","192.168.123.107:0/1541928502":"2026-03-10T14:29:42.227300+0000","192.168.123.107:0/3454294218":"2026-03-10T14:29:33.188169+0000","192.168.123.107:6800/3548929186":"2026-03-10T14:29:33.188169+0000","192.168.123.107:6801/3548929186":"2026-03-10T14:29:33.188169+0000"},"erasure_cod
e_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T14:32:32.537 INFO:tasks.cephadm.ceph_manager.ceph:all up! 2026-03-09T14:32:32.537 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph osd dump --format=json 2026-03-09T14:32:32.915 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:32 vm07 bash[22585]: cluster 2026-03-09T14:32:31.004669+0000 mgr.y (mgr.24310) 21 : cluster [DBG] pgmap v7: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:32.915 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:32 vm07 bash[22585]: audit 2026-03-09T14:32:32.474175+0000 mon.c (mon.1) 19 : audit [DBG] from='client.? 192.168.123.107:0/2023620047' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:32:32.915 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:32 vm07 bash[17480]: cluster 2026-03-09T14:32:31.004669+0000 mgr.y (mgr.24310) 21 : cluster [DBG] pgmap v7: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:32.915 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:32 vm07 bash[17480]: audit 2026-03-09T14:32:32.474175+0000 mon.c (mon.1) 19 : audit [DBG] from='client.? 192.168.123.107:0/2023620047' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:32:33.009 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:32 vm11 bash[17885]: cluster 2026-03-09T14:32:31.004669+0000 mgr.y (mgr.24310) 21 : cluster [DBG] pgmap v7: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:33.009 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:32 vm11 bash[17885]: audit 2026-03-09T14:32:32.474175+0000 mon.c (mon.1) 19 : audit [DBG] from='client.? 192.168.123.107:0/2023620047' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:32:34.045 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:32:33 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:34.045 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:32:33 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:34.045 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:33 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:34.045 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:33 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:34.045 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:33 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:34.045 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:33 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:34.045 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:33 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:34.045 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:33 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:34.046 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:33 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:34.046 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:33 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:32:34.046 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:32:33 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:34.046 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:32:33 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:34.046 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:32:33 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:34.046 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:32:33 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:34.046 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:32:33 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:34.046 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:32:33 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:34.046 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:33 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:34.046 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:32:33 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:34.415 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:32:34 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:34.509 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:34 vm11 systemd[1]: Started Ceph prometheus.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:32:34.509 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:34 vm11 bash[33090]: ts=2026-03-09T14:32:34.193Z caller=main.go:475 level=info msg="No time or size retention was set so using the default time retention" duration=15d 2026-03-09T14:32:34.509 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:34 vm11 bash[33090]: ts=2026-03-09T14:32:34.193Z caller=main.go:512 level=info msg="Starting Prometheus" version="(version=2.33.4, branch=HEAD, revision=83032011a5d3e6102624fe58241a374a7201fee8)" 2026-03-09T14:32:34.509 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:34 vm11 bash[33090]: ts=2026-03-09T14:32:34.193Z caller=main.go:517 level=info build_context="(go=go1.17.7, user=root@d13bf69e7be8, date=20220222-16:51:28)" 2026-03-09T14:32:34.509 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:34 vm11 bash[33090]: ts=2026-03-09T14:32:34.193Z caller=main.go:518 level=info host_details="(Linux 5.15.0-1092-kvm #97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026 x86_64 vm11 (none))" 2026-03-09T14:32:34.509 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:34 vm11 bash[33090]: ts=2026-03-09T14:32:34.194Z caller=main.go:519 level=info fd_limits="(soft=1048576, hard=1048576)" 2026-03-09T14:32:34.509 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:34 vm11 bash[33090]: ts=2026-03-09T14:32:34.194Z caller=main.go:520 level=info vm_limits="(soft=unlimited, hard=unlimited)" 2026-03-09T14:32:34.509 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:34 vm11 bash[33090]: ts=2026-03-09T14:32:34.195Z caller=web.go:570 level=info component=web msg="Start listening for connections" address=:9095 2026-03-09T14:32:34.509 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:34 vm11 bash[33090]: ts=2026-03-09T14:32:34.195Z caller=main.go:923 level=info msg="Starting TSDB ..." 2026-03-09T14:32:34.509 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:34 vm11 bash[33090]: ts=2026-03-09T14:32:34.196Z caller=tls_config.go:195 level=info component=web msg="TLS is disabled." 
http2=false 2026-03-09T14:32:34.509 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:34 vm11 bash[33090]: ts=2026-03-09T14:32:34.198Z caller=head.go:493 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 2026-03-09T14:32:34.509 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:34 vm11 bash[33090]: ts=2026-03-09T14:32:34.200Z caller=head.go:527 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=3.307µs 2026-03-09T14:32:34.509 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:34 vm11 bash[33090]: ts=2026-03-09T14:32:34.200Z caller=head.go:533 level=info component=tsdb msg="Replaying WAL, this may take a while" 2026-03-09T14:32:34.509 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:34 vm11 bash[33090]: ts=2026-03-09T14:32:34.201Z caller=head.go:604 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 2026-03-09T14:32:34.509 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:34 vm11 bash[33090]: ts=2026-03-09T14:32:34.201Z caller=head.go:610 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=301.549µs wal_replay_duration=829.829µs total_replay_duration=1.161586ms 2026-03-09T14:32:34.509 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:34 vm11 bash[33090]: ts=2026-03-09T14:32:34.202Z caller=main.go:944 level=info fs_type=EXT4_SUPER_MAGIC 2026-03-09T14:32:34.509 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:34 vm11 bash[33090]: ts=2026-03-09T14:32:34.202Z caller=main.go:947 level=info msg="TSDB started" 2026-03-09T14:32:34.509 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:34 vm11 bash[33090]: ts=2026-03-09T14:32:34.202Z caller=main.go:1128 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 2026-03-09T14:32:34.509 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:34 vm11 bash[33090]: ts=2026-03-09T14:32:34.211Z caller=main.go:1165 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=9.702339ms db_storage=841ns remote_storage=1.303µs web_handler=561ns query_engine=992ns scrape=897.642µs scrape_sd=32.472µs notify=611ns notify_sd=1.103µs rules=8.526083ms 2026-03-09T14:32:34.509 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:34 vm11 bash[33090]: ts=2026-03-09T14:32:34.211Z caller=main.go:896 level=info msg="Server is ready to receive web requests." 
2026-03-09T14:32:34.915 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:34 vm07 bash[22585]: cluster 2026-03-09T14:32:33.004991+0000 mgr.y (mgr.24310) 22 : cluster [DBG] pgmap v8: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:34.915 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:34 vm07 bash[22585]: audit 2026-03-09T14:32:34.072454+0000 mon.a (mon.0) 599 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:34.915 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:34 vm07 bash[17480]: cluster 2026-03-09T14:32:33.004991+0000 mgr.y (mgr.24310) 22 : cluster [DBG] pgmap v8: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:34.915 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:34 vm07 bash[17480]: audit 2026-03-09T14:32:34.072454+0000 mon.a (mon.0) 599 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:35.009 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:34 vm11 bash[17885]: cluster 2026-03-09T14:32:33.004991+0000 mgr.y (mgr.24310) 22 : cluster [DBG] pgmap v8: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:35.009 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:34 vm11 bash[17885]: audit 2026-03-09T14:32:34.072454+0000 mon.a (mon.0) 599 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:35.163 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/mon.c/config 2026-03-09T14:32:35.518 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T14:32:35.518 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":50,"fsid":"f59f9828-1bc3-11f1-bfd8-7b3d0c866040","created":"2026-03-09T14:29:19.844551+0000","modified":"2026-03-09T14:32:22.013680+0000","last_up_change":"2026-03-09T14:32:09.277693+0000","last_in_change":"2026-03-09T14:31:55.753851+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"quincy","pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T14:30:54.519768+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"21","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params"
:{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}}}],"osds":[{"osd":0,"uuid":"01f1c7a2-0d56-449a-98b5-2d0134c34758","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":47,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6802","nonce":3608472040},{"type":"v1","addr":"192.168.123.107:6803","nonce":3608472040}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6804","nonce":3608472040},{"type":"v1","addr":"192.168.123.107:6805","nonce":3608472040}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6808","nonce":3608472040},{"type":"v1","addr":"192.168.123.107:6809","nonce":3608472040}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6806","nonce":3608472040},{"type":"v1","addr":"192.168.123.107:6807","nonce":3608472040}]},"public_addr":"192.168.123.107:6803/3608472040","cluster_addr":"192.168.123.107:6805/3608472040","heartbeat_back_addr":"192.168.123.107:6809/3608472040","heartbeat_front_addr":"192.168.123.107:6807/3608472040","state":["exists","up"]},{"osd":1,"uuid":"c5bcdd68-0c8f-46dc-8a25-561605efa0ff","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":31,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6810","nonce":2809750614},{"type":"v1","addr":"192.168.123.107:6811","nonce":2809750614}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6812","nonce":2809750614},{"type":"v1","addr":"192.168.123.107:6813","nonce":2809750614}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6816","nonce":2809750614},{"type":"v1","addr":"192.168.123.107:6817","nonce":2809750614}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6814","nonce":2809750614},{"type":"v1","addr":"192.168.123.107:6815","nonce":2809750614}]},"public_addr":"192.168.123.107:6811/2809750614","cluster_addr":"192.168.123.107:6813/2809750614","heartbeat_back_addr":"192.168.123.107:6817/2809750614","heartbeat_front_addr":"192.168.123.107:6815/2809750614","state":["exists","up"]},{"osd":2,"uuid":"6878f209-d828-467d-8a66-6cca096732a5","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6818","nonce":2936867491},{"type":"v1","addr":"192.168.123.107:6819","nonce":2936867491}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6820","nonce":2936867491},{"type":"v1","addr":"192.168.123.107:6821","nonce":2936867491}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6824","nonce":2936867491},{"type":"v1","addr":"192.168.123.107:6825","nonce":2936867491}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6822","nonce":2936867491},{"type":"v1","addr":"192.168.123.107:6823","nonce":2936867491}]},"public_addr":"192.168.123.107:6819/2936867491","cluster_addr":"192.168.123.107:6821/2936867491","heartbeat_back_addr":"192.168.123.107:6825/2936867491","heartbeat_front_addr":"192.168.123.107:6823/2936867491","state":["exists","up"]},{"o
sd":3,"uuid":"afc54d82-66a7-42e1-83c1-0970428ef794","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":25,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6826","nonce":2142580280},{"type":"v1","addr":"192.168.123.107:6827","nonce":2142580280}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6828","nonce":2142580280},{"type":"v1","addr":"192.168.123.107:6829","nonce":2142580280}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6832","nonce":2142580280},{"type":"v1","addr":"192.168.123.107:6833","nonce":2142580280}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6830","nonce":2142580280},{"type":"v1","addr":"192.168.123.107:6831","nonce":2142580280}]},"public_addr":"192.168.123.107:6827/2142580280","cluster_addr":"192.168.123.107:6829/2142580280","heartbeat_back_addr":"192.168.123.107:6833/2142580280","heartbeat_front_addr":"192.168.123.107:6831/2142580280","state":["exists","up"]},{"osd":4,"uuid":"8e6cc346-4281-49a1-9886-18c25e9addfc","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":30,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6800","nonce":2733246535},{"type":"v1","addr":"192.168.123.111:6801","nonce":2733246535}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6802","nonce":2733246535},{"type":"v1","addr":"192.168.123.111:6803","nonce":2733246535}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6806","nonce":2733246535},{"type":"v1","addr":"192.168.123.111:6807","nonce":2733246535}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6804","nonce":2733246535},{"type":"v1","addr":"192.168.123.111:6805","nonce":2733246535}]},"public_addr":"192.168.123.111:6801/2733246535","cluster_addr":"192.168.123.111:6803/2733246535","heartbeat_back_addr":"192.168.123.111:6807/2733246535","heartbeat_front_addr":"192.168.123.111:6805/2733246535","state":["exists","up"]},{"osd":5,"uuid":"104be397-ca1c-4a2d-ae2d-97efa37d095a","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":36,"up_thru":37,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6808","nonce":122506048},{"type":"v1","addr":"192.168.123.111:6809","nonce":122506048}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6810","nonce":122506048},{"type":"v1","addr":"192.168.123.111:6811","nonce":122506048}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6814","nonce":122506048},{"type":"v1","addr":"192.168.123.111:6815","nonce":122506048}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6812","nonce":122506048},{"type":"v1","addr":"192.168.123.111:6813","nonce":122506048}]},"public_addr":"192.168.123.111:6809/122506048","cluster_addr":"192.168.123.111:6811/122506048","heartbeat_back_addr":"192.168.123.111:6815/122506048","heartbeat_front_addr":"192.168.123.111:6813/122506048","state":["exists","up"]},{"osd":6,"uuid":"77a63107-dca7-4e61-85ab-633ea82bcb7d","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":41,"up_thru":42,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6816","nonce":615402579},{"type":"v1","addr":"192.168.123.111:6817","nonce":615402579}]},"cluster_addrs":{"addrvec":[{"type":
"v2","addr":"192.168.123.111:6818","nonce":615402579},{"type":"v1","addr":"192.168.123.111:6819","nonce":615402579}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6822","nonce":615402579},{"type":"v1","addr":"192.168.123.111:6823","nonce":615402579}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6820","nonce":615402579},{"type":"v1","addr":"192.168.123.111:6821","nonce":615402579}]},"public_addr":"192.168.123.111:6817/615402579","cluster_addr":"192.168.123.111:6819/615402579","heartbeat_back_addr":"192.168.123.111:6823/615402579","heartbeat_front_addr":"192.168.123.111:6821/615402579","state":["exists","up"]},{"osd":7,"uuid":"abdf6bc5-5826-4388-bb2b-2d627c14c61b","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":47,"up_thru":48,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6824","nonce":2351382271},{"type":"v1","addr":"192.168.123.111:6825","nonce":2351382271}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6826","nonce":2351382271},{"type":"v1","addr":"192.168.123.111:6827","nonce":2351382271}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6830","nonce":2351382271},{"type":"v1","addr":"192.168.123.111:6831","nonce":2351382271}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6828","nonce":2351382271},{"type":"v1","addr":"192.168.123.111:6829","nonce":2351382271}]},"public_addr":"192.168.123.111:6825/2351382271","cluster_addr":"192.168.123.111:6827/2351382271","heartbeat_back_addr":"192.168.123.111:6831/2351382271","heartbeat_front_addr":"192.168.123.111:6829/2351382271","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:30:21.765859+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:30:37.134085+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:30:52.232648+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:31:08.073169+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:31:22.394871+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:31:37.533651+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:31:51.207510+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:32:07.306062+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.107:0/142611623":"2026-03-10T14:32:22.013656+0000","192.168.123.107:6801/3123223907":"2026-03-10T14:32:22.013656+0000","192.168.123.107:6800/3123
223907":"2026-03-10T14:32:22.013656+0000","192.168.123.107:0/1711405865":"2026-03-10T14:32:22.013656+0000","192.168.123.107:0/2850874332":"2026-03-10T14:29:42.227300+0000","192.168.123.107:0/1354642186":"2026-03-10T14:32:22.013656+0000","192.168.123.107:0/2030379457":"2026-03-10T14:29:42.227300+0000","192.168.123.107:6801/735153467":"2026-03-10T14:29:42.227300+0000","192.168.123.107:0/2859869932":"2026-03-10T14:32:22.013656+0000","192.168.123.107:0/1327540493":"2026-03-10T14:29:33.188169+0000","192.168.123.107:0/1561120863":"2026-03-10T14:29:33.188169+0000","192.168.123.107:6800/735153467":"2026-03-10T14:29:42.227300+0000","192.168.123.107:0/1541928502":"2026-03-10T14:29:42.227300+0000","192.168.123.107:0/3454294218":"2026-03-10T14:29:33.188169+0000","192.168.123.107:6800/3548929186":"2026-03-10T14:29:33.188169+0000","192.168.123.107:6801/3548929186":"2026-03-10T14:29:33.188169+0000"},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T14:32:35.575 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph tell osd.0 flush_pg_stats 2026-03-09T14:32:35.575 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph tell osd.1 flush_pg_stats 2026-03-09T14:32:35.575 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph tell osd.2 flush_pg_stats 2026-03-09T14:32:35.575 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph tell osd.3 flush_pg_stats 2026-03-09T14:32:35.575 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph tell osd.4 flush_pg_stats 2026-03-09T14:32:35.576 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph tell osd.5 flush_pg_stats 2026-03-09T14:32:35.576 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph tell osd.6 flush_pg_stats 2026-03-09T14:32:35.576 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph tell osd.7 flush_pg_stats 2026-03-09T14:32:35.915 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:35 vm07 bash[17480]: cephadm 2026-03-09T14:32:34.077463+0000 mgr.y (mgr.24310) 23 : cephadm [INF] Deploying daemon alertmanager.a on vm07 2026-03-09T14:32:35.915 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:35 vm07 bash[17480]: audit 2026-03-09T14:32:35.515480+0000 mon.b (mon.2) 53 : audit [DBG] from='client.? 
192.168.123.107:0/3790723890' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:32:35.915 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:35 vm07 bash[22585]: cephadm 2026-03-09T14:32:34.077463+0000 mgr.y (mgr.24310) 23 : cephadm [INF] Deploying daemon alertmanager.a on vm07 2026-03-09T14:32:35.915 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:35 vm07 bash[22585]: audit 2026-03-09T14:32:35.515480+0000 mon.b (mon.2) 53 : audit [DBG] from='client.? 192.168.123.107:0/3790723890' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:32:36.009 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:35 vm11 bash[17885]: cephadm 2026-03-09T14:32:34.077463+0000 mgr.y (mgr.24310) 23 : cephadm [INF] Deploying daemon alertmanager.a on vm07 2026-03-09T14:32:36.009 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:35 vm11 bash[17885]: audit 2026-03-09T14:32:35.515480+0000 mon.b (mon.2) 53 : audit [DBG] from='client.? 192.168.123.107:0/3790723890' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:32:36.915 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:36 vm07 bash[22585]: cluster 2026-03-09T14:32:35.005397+0000 mgr.y (mgr.24310) 24 : cluster [DBG] pgmap v9: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:36.915 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:36 vm07 bash[17480]: cluster 2026-03-09T14:32:35.005397+0000 mgr.y (mgr.24310) 24 : cluster [DBG] pgmap v9: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:37.009 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:36 vm11 bash[17885]: cluster 2026-03-09T14:32:35.005397+0000 mgr.y (mgr.24310) 24 : cluster [DBG] pgmap v9: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:37.859 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:37 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:37.859 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:32:37 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:37.859 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:32:37 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:37.859 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:37 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:37.859 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:37 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:37.859 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:32:37 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:37.859 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:32:37 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:37.860 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:37 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:38.166 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:32:37 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:38.166 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:38 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:38.166 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:32:37 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:32:38.166 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:38 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:38.167 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:38 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:38.167 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:32:37 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:38.167 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:32:37 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:38.167 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:32:37 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:38.167 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:32:37 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:38.167 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:32:37 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:38.167 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:32:37 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:38.167 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:32:38 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:38.167 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:32:38 vm07 systemd[1]: Started Ceph alertmanager.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:32:38.498 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/mon.c/config 2026-03-09T14:32:38.502 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/mon.c/config 2026-03-09T14:32:38.502 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/mon.c/config 2026-03-09T14:32:38.503 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/mon.c/config 2026-03-09T14:32:38.504 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/mon.c/config 2026-03-09T14:32:38.506 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/mon.c/config 2026-03-09T14:32:38.507 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/mon.c/config 2026-03-09T14:32:38.508 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/mon.c/config 2026-03-09T14:32:38.508 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:38 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:38.535 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:32:38 vm07 bash[38490]: level=info ts=2026-03-09T14:32:38.224Z caller=main.go:225 msg="Starting Alertmanager" version="(version=0.23.0, branch=HEAD, revision=61046b17771a57cfd4c4a51be370ab930a4d7d54)" 2026-03-09T14:32:38.535 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:32:38 vm07 bash[38490]: level=info ts=2026-03-09T14:32:38.224Z caller=main.go:226 build_context="(go=go1.16.7, user=root@e21a959be8d2, date=20210825-10:48:55)" 2026-03-09T14:32:38.535 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:32:38 vm07 bash[38490]: level=info ts=2026-03-09T14:32:38.225Z caller=cluster.go:184 component=cluster msg="setting advertise address explicitly" addr=192.168.123.107 port=9094 2026-03-09T14:32:38.535 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:32:38 vm07 bash[38490]: level=info ts=2026-03-09T14:32:38.226Z caller=cluster.go:671 component=cluster msg="Waiting for gossip to settle..." 
interval=2s 2026-03-09T14:32:38.535 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:32:38 vm07 bash[38490]: level=info ts=2026-03-09T14:32:38.243Z caller=coordinator.go:113 component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-09T14:32:38.535 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:32:38 vm07 bash[38490]: level=info ts=2026-03-09T14:32:38.243Z caller=coordinator.go:126 component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-09T14:32:38.535 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:32:38 vm07 bash[38490]: level=info ts=2026-03-09T14:32:38.245Z caller=main.go:518 msg=Listening address=:9093 2026-03-09T14:32:38.535 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:32:38 vm07 bash[38490]: level=info ts=2026-03-09T14:32:38.245Z caller=tls_config.go:191 msg="TLS is disabled." http2=false 2026-03-09T14:32:38.814 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:38 vm07 bash[22585]: cluster 2026-03-09T14:32:37.005829+0000 mgr.y (mgr.24310) 25 : cluster [DBG] pgmap v10: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:38.814 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:38 vm07 bash[22585]: audit 2026-03-09T14:32:38.097166+0000 mon.a (mon.0) 600 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:38.814 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:38 vm07 bash[22585]: audit 2026-03-09T14:32:38.102797+0000 mon.a (mon.0) 601 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:38.814 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:38 vm07 bash[22585]: audit 2026-03-09T14:32:38.122531+0000 mon.a (mon.0) 602 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:38.814 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:38 vm07 bash[22585]: audit 2026-03-09T14:32:38.127220+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:38.814 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:38 vm07 bash[22585]: audit 2026-03-09T14:32:38.133812+0000 mon.b (mon.2) 54 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T14:32:38.814 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:38 vm07 bash[22585]: audit 2026-03-09T14:32:38.149534+0000 mon.a (mon.0) 604 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:38.814 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:38 vm07 bash[17480]: cluster 2026-03-09T14:32:37.005829+0000 mgr.y (mgr.24310) 25 : cluster [DBG] pgmap v10: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:38.814 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:38 vm07 bash[17480]: audit 2026-03-09T14:32:38.097166+0000 mon.a (mon.0) 600 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:38.814 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:38 vm07 bash[17480]: audit 2026-03-09T14:32:38.102797+0000 mon.a (mon.0) 601 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:38.814 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:38 vm07 bash[17480]: audit 2026-03-09T14:32:38.122531+0000 mon.a (mon.0) 602 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:38.814 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:38 vm07 bash[17480]: audit 2026-03-09T14:32:38.127220+0000 mon.a (mon.0) 603 : audit [INF] 
from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:38.814 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:38 vm07 bash[17480]: audit 2026-03-09T14:32:38.133812+0000 mon.b (mon.2) 54 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T14:32:38.814 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:38 vm07 bash[17480]: audit 2026-03-09T14:32:38.149534+0000 mon.a (mon.0) 604 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:39.009 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:38 vm11 bash[17885]: cluster 2026-03-09T14:32:37.005829+0000 mgr.y (mgr.24310) 25 : cluster [DBG] pgmap v10: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:39.009 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:38 vm11 bash[17885]: audit 2026-03-09T14:32:38.097166+0000 mon.a (mon.0) 600 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:39.009 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:38 vm11 bash[17885]: audit 2026-03-09T14:32:38.102797+0000 mon.a (mon.0) 601 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:39.009 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:38 vm11 bash[17885]: audit 2026-03-09T14:32:38.122531+0000 mon.a (mon.0) 602 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:39.009 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:38 vm11 bash[17885]: audit 2026-03-09T14:32:38.127220+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:39.009 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:38 vm11 bash[17885]: audit 2026-03-09T14:32:38.133812+0000 mon.b (mon.2) 54 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T14:32:39.009 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:38 vm11 bash[17885]: audit 2026-03-09T14:32:38.149534+0000 mon.a (mon.0) 604 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:39.559 INFO:teuthology.orchestra.run.vm07.stdout:55834574873 2026-03-09T14:32:39.559 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph osd last-stat-seq osd.1 2026-03-09T14:32:39.716 INFO:teuthology.orchestra.run.vm07.stdout:34359738396 2026-03-09T14:32:39.716 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph osd last-stat-seq osd.0 2026-03-09T14:32:39.822 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:39 vm07 bash[22585]: audit 2026-03-09T14:32:38.134552+0000 mgr.y (mgr.24310) 26 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T14:32:39.822 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:39 vm07 bash[22585]: cephadm 2026-03-09T14:32:38.160658+0000 mgr.y (mgr.24310) 27 : cephadm [INF] Deploying daemon grafana.a on vm11 2026-03-09T14:32:39.823 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:39 vm07 bash[17480]: audit 2026-03-09T14:32:38.134552+0000 mgr.y (mgr.24310) 26 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T14:32:39.823 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:39 vm07 bash[17480]: cephadm 2026-03-09T14:32:38.160658+0000 mgr.y (mgr.24310) 27 : cephadm [INF] Deploying daemon grafana.a on vm11 2026-03-09T14:32:39.866 INFO:teuthology.orchestra.run.vm07.stdout:176093659146 2026-03-09T14:32:39.866 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph osd last-stat-seq osd.6 2026-03-09T14:32:39.955 INFO:teuthology.orchestra.run.vm07.stdout:77309411350 2026-03-09T14:32:39.955 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph osd last-stat-seq osd.2 2026-03-09T14:32:39.971 INFO:teuthology.orchestra.run.vm07.stdout:201863462919 2026-03-09T14:32:39.971 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph osd last-stat-seq osd.7 2026-03-09T14:32:40.009 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:39 vm11 bash[17885]: audit 2026-03-09T14:32:38.134552+0000 mgr.y (mgr.24310) 26 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T14:32:40.009 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:39 vm11 bash[17885]: cephadm 2026-03-09T14:32:38.160658+0000 mgr.y (mgr.24310) 27 : cephadm [INF] Deploying daemon grafana.a on vm11 2026-03-09T14:32:40.072 INFO:teuthology.orchestra.run.vm07.stdout:128849018896 2026-03-09T14:32:40.073 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph osd last-stat-seq osd.4 2026-03-09T14:32:40.094 INFO:teuthology.orchestra.run.vm07.stdout:107374182420 2026-03-09T14:32:40.094 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph osd last-stat-seq osd.3 2026-03-09T14:32:40.095 INFO:teuthology.orchestra.run.vm07.stdout:154618822669 2026-03-09T14:32:40.095 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph osd last-stat-seq osd.5 2026-03-09T14:32:40.414 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:32:40 vm07 bash[38490]: level=info ts=2026-03-09T14:32:40.230Z caller=cluster.go:696 component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.00401893s 2026-03-09T14:32:41.008 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:40 vm11 bash[17885]: cluster 2026-03-09T14:32:39.006151+0000 mgr.y (mgr.24310) 28 : cluster [DBG] pgmap v11: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:41.164 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:40 vm07 bash[22585]: cluster 2026-03-09T14:32:39.006151+0000 mgr.y (mgr.24310) 28 : cluster [DBG] pgmap v11: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:41.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:40 vm07 bash[17480]: cluster 2026-03-09T14:32:39.006151+0000 mgr.y (mgr.24310) 28 : cluster [DBG] pgmap v11: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 
160 GiB avail 2026-03-09T14:32:42.008 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:41 vm11 bash[17885]: cluster 2026-03-09T14:32:41.006474+0000 mgr.y (mgr.24310) 29 : cluster [DBG] pgmap v12: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:42.164 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:41 vm07 bash[22585]: cluster 2026-03-09T14:32:41.006474+0000 mgr.y (mgr.24310) 29 : cluster [DBG] pgmap v12: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:42.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:41 vm07 bash[17480]: cluster 2026-03-09T14:32:41.006474+0000 mgr.y (mgr.24310) 29 : cluster [DBG] pgmap v12: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:42.301 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/mon.c/config 2026-03-09T14:32:42.302 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/mon.c/config 2026-03-09T14:32:42.304 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/mon.c/config 2026-03-09T14:32:42.305 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/mon.c/config 2026-03-09T14:32:42.305 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/mon.c/config 2026-03-09T14:32:42.309 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/mon.c/config 2026-03-09T14:32:42.310 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/mon.c/config 2026-03-09T14:32:42.312 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/mon.c/config 2026-03-09T14:32:42.589 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:42 vm07 bash[17785]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:32:42] "GET /metrics HTTP/1.1" 200 191122 "" "Prometheus/2.33.4" 2026-03-09T14:32:43.209 INFO:teuthology.orchestra.run.vm07.stdout:176093659146 2026-03-09T14:32:43.535 INFO:tasks.cephadm.ceph_manager.ceph:need seq 176093659146 got 176093659146 for osd.6 2026-03-09T14:32:43.535 DEBUG:teuthology.parallel:result is None 2026-03-09T14:32:43.560 INFO:teuthology.orchestra.run.vm07.stdout:34359738396 2026-03-09T14:32:43.615 INFO:teuthology.orchestra.run.vm07.stdout:128849018896 2026-03-09T14:32:43.718 INFO:tasks.cephadm.ceph_manager.ceph:need seq 34359738396 got 34359738396 for osd.0 2026-03-09T14:32:43.718 DEBUG:teuthology.parallel:result is None 2026-03-09T14:32:43.758 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:43 vm11 bash[18539]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:32:43] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T14:32:43.761 INFO:teuthology.orchestra.run.vm07.stdout:201863462919 2026-03-09T14:32:43.767 INFO:tasks.cephadm.ceph_manager.ceph:need seq 128849018896 got 128849018896 for osd.4 2026-03-09T14:32:43.767 DEBUG:teuthology.parallel:result is None 2026-03-09T14:32:43.822 INFO:teuthology.orchestra.run.vm07.stdout:107374182420 2026-03-09T14:32:43.855 INFO:tasks.cephadm.ceph_manager.ceph:need seq 201863462919 got 201863462919 for osd.7 2026-03-09T14:32:43.855 DEBUG:teuthology.parallel:result is None 2026-03-09T14:32:43.894 INFO:teuthology.orchestra.run.vm07.stdout:55834574873 
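[editor's note] The records just above and below follow the usual wait-for-clean pattern: each OSD is told to flush_pg_stats, the manager confirms "osd last-stat-seq" has caught up with the sequence returned by the flush ("need seq X got X for osd.N"), and then "ceph pg dump --format=json" is polled until every PG reports active+clean. The sketch below is a hypothetical illustration of that pattern only, not teuthology's actual helpers; the function names (ceph, wait_for_clean) are invented and the cephadm invocation is simplified (the real commands above also pass --image and --fsid).

import json
import subprocess
import time

def ceph(*args):
    # Run a ceph command through cephadm shell and return its stdout as text.
    # (Simplified: the log's invocations also pass --image and --fsid.)
    return subprocess.check_output(
        ["sudo", "cephadm", "shell", "--", "ceph", *args], text=True)

def wait_for_clean(flushed_seqs, timeout=300):
    # flushed_seqs: {osd_id: seq returned by "ceph tell osd.N flush_pg_stats"}
    # Step 1: each OSD's reported stat sequence must reach the flushed value.
    for osd_id, need in flushed_seqs.items():
        while int(ceph("osd", "last-stat-seq", f"osd.{osd_id}")) < need:
            time.sleep(5)
    # Step 2: poll the PG map until all PGs are active+clean.
    deadline = time.time() + timeout
    while time.time() < deadline:
        dump = json.loads(ceph("pg", "dump", "--format=json"))
        pg_stats = dump["pg_map"]["pg_stats"]
        if pg_stats and all(p["state"] == "active+clean" for p in pg_stats):
            return
        time.sleep(10)
    raise TimeoutError(f"cluster did not become clean within {timeout}s")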
2026-03-09T14:32:43.941 INFO:tasks.cephadm.ceph_manager.ceph:need seq 107374182420 got 107374182420 for osd.3 2026-03-09T14:32:43.941 DEBUG:teuthology.parallel:result is None 2026-03-09T14:32:43.959 INFO:teuthology.orchestra.run.vm07.stdout:154618822669 2026-03-09T14:32:43.993 INFO:teuthology.orchestra.run.vm07.stdout:77309411350 2026-03-09T14:32:44.014 INFO:tasks.cephadm.ceph_manager.ceph:need seq 55834574873 got 55834574873 for osd.1 2026-03-09T14:32:44.015 DEBUG:teuthology.parallel:result is None 2026-03-09T14:32:44.054 INFO:tasks.cephadm.ceph_manager.ceph:need seq 154618822669 got 154618822669 for osd.5 2026-03-09T14:32:44.054 DEBUG:teuthology.parallel:result is None 2026-03-09T14:32:44.108 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:44 vm07 bash[17480]: cluster 2026-03-09T14:32:43.006788+0000 mgr.y (mgr.24310) 30 : cluster [DBG] pgmap v13: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:44.108 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:44 vm07 bash[17480]: audit 2026-03-09T14:32:43.104396+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:44.112 INFO:tasks.cephadm.ceph_manager.ceph:need seq 77309411350 got 77309411350 for osd.2 2026-03-09T14:32:44.112 DEBUG:teuthology.parallel:result is None 2026-03-09T14:32:44.112 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean 2026-03-09T14:32:44.112 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph pg dump --format=json 2026-03-09T14:32:44.414 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:44 vm07 bash[22585]: cluster 2026-03-09T14:32:43.006788+0000 mgr.y (mgr.24310) 30 : cluster [DBG] pgmap v13: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:44.414 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:44 vm07 bash[22585]: audit 2026-03-09T14:32:43.104396+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:44.414 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:44 vm07 bash[22585]: audit 2026-03-09T14:32:43.207521+0000 mon.a (mon.0) 606 : audit [DBG] from='client.? 192.168.123.107:0/160163430' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-09T14:32:44.414 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:44 vm07 bash[22585]: audit 2026-03-09T14:32:43.549476+0000 mon.c (mon.1) 20 : audit [DBG] from='client.? 192.168.123.107:0/2746554875' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T14:32:44.414 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:44 vm07 bash[22585]: audit 2026-03-09T14:32:43.609860+0000 mon.b (mon.2) 55 : audit [DBG] from='client.? 192.168.123.107:0/357311343' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T14:32:44.414 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:44 vm07 bash[22585]: audit 2026-03-09T14:32:43.760127+0000 mon.c (mon.1) 21 : audit [DBG] from='client.? 192.168.123.107:0/231060796' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-09T14:32:44.414 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:44 vm07 bash[22585]: audit 2026-03-09T14:32:43.813885+0000 mon.a (mon.0) 607 : audit [DBG] from='client.? 
192.168.123.107:0/185692843' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-09T14:32:44.414 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:44 vm07 bash[22585]: audit 2026-03-09T14:32:43.890736+0000 mon.a (mon.0) 608 : audit [DBG] from='client.? 192.168.123.107:0/3028869341' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T14:32:44.414 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:44 vm07 bash[22585]: audit 2026-03-09T14:32:43.956963+0000 mon.c (mon.1) 22 : audit [DBG] from='client.? 192.168.123.107:0/1155769589' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T14:32:44.414 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:44 vm07 bash[22585]: audit 2026-03-09T14:32:43.990338+0000 mon.a (mon.0) 609 : audit [DBG] from='client.? 192.168.123.107:0/3445804350' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T14:32:44.414 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:44 vm07 bash[17480]: audit 2026-03-09T14:32:43.207521+0000 mon.a (mon.0) 606 : audit [DBG] from='client.? 192.168.123.107:0/160163430' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-09T14:32:44.415 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:44 vm07 bash[17480]: audit 2026-03-09T14:32:43.549476+0000 mon.c (mon.1) 20 : audit [DBG] from='client.? 192.168.123.107:0/2746554875' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T14:32:44.415 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:44 vm07 bash[17480]: audit 2026-03-09T14:32:43.609860+0000 mon.b (mon.2) 55 : audit [DBG] from='client.? 192.168.123.107:0/357311343' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T14:32:44.415 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:44 vm07 bash[17480]: audit 2026-03-09T14:32:43.760127+0000 mon.c (mon.1) 21 : audit [DBG] from='client.? 192.168.123.107:0/231060796' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-09T14:32:44.415 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:44 vm07 bash[17480]: audit 2026-03-09T14:32:43.813885+0000 mon.a (mon.0) 607 : audit [DBG] from='client.? 192.168.123.107:0/185692843' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-09T14:32:44.415 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:44 vm07 bash[17480]: audit 2026-03-09T14:32:43.890736+0000 mon.a (mon.0) 608 : audit [DBG] from='client.? 192.168.123.107:0/3028869341' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T14:32:44.415 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:44 vm07 bash[17480]: audit 2026-03-09T14:32:43.956963+0000 mon.c (mon.1) 22 : audit [DBG] from='client.? 192.168.123.107:0/1155769589' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T14:32:44.415 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:44 vm07 bash[17480]: audit 2026-03-09T14:32:43.990338+0000 mon.a (mon.0) 609 : audit [DBG] from='client.? 
192.168.123.107:0/3445804350' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T14:32:44.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:44 vm11 bash[17885]: cluster 2026-03-09T14:32:43.006788+0000 mgr.y (mgr.24310) 30 : cluster [DBG] pgmap v13: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:44.509 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:44 vm11 bash[17885]: audit 2026-03-09T14:32:43.104396+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:44.509 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:44 vm11 bash[17885]: audit 2026-03-09T14:32:43.207521+0000 mon.a (mon.0) 606 : audit [DBG] from='client.? 192.168.123.107:0/160163430' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-09T14:32:44.509 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:44 vm11 bash[17885]: audit 2026-03-09T14:32:43.549476+0000 mon.c (mon.1) 20 : audit [DBG] from='client.? 192.168.123.107:0/2746554875' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T14:32:44.509 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:44 vm11 bash[17885]: audit 2026-03-09T14:32:43.609860+0000 mon.b (mon.2) 55 : audit [DBG] from='client.? 192.168.123.107:0/357311343' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T14:32:44.509 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:44 vm11 bash[17885]: audit 2026-03-09T14:32:43.760127+0000 mon.c (mon.1) 21 : audit [DBG] from='client.? 192.168.123.107:0/231060796' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-09T14:32:44.509 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:44 vm11 bash[17885]: audit 2026-03-09T14:32:43.813885+0000 mon.a (mon.0) 607 : audit [DBG] from='client.? 192.168.123.107:0/185692843' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-09T14:32:44.509 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:44 vm11 bash[17885]: audit 2026-03-09T14:32:43.890736+0000 mon.a (mon.0) 608 : audit [DBG] from='client.? 192.168.123.107:0/3028869341' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T14:32:44.509 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:44 vm11 bash[17885]: audit 2026-03-09T14:32:43.956963+0000 mon.c (mon.1) 22 : audit [DBG] from='client.? 192.168.123.107:0/1155769589' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T14:32:44.509 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:44 vm11 bash[17885]: audit 2026-03-09T14:32:43.990338+0000 mon.a (mon.0) 609 : audit [DBG] from='client.? 
192.168.123.107:0/3445804350' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T14:32:46.722 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/mon.c/config 2026-03-09T14:32:46.889 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:46 vm07 bash[22585]: cluster 2026-03-09T14:32:45.007073+0000 mgr.y (mgr.24310) 31 : cluster [DBG] pgmap v14: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:46.889 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:46 vm07 bash[17480]: cluster 2026-03-09T14:32:45.007073+0000 mgr.y (mgr.24310) 31 : cluster [DBG] pgmap v14: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:47.008 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:46 vm11 bash[17885]: cluster 2026-03-09T14:32:45.007073+0000 mgr.y (mgr.24310) 31 : cluster [DBG] pgmap v14: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:47.062 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T14:32:47.065 INFO:teuthology.orchestra.run.vm07.stderr:dumped all 2026-03-09T14:32:47.113 INFO:teuthology.orchestra.run.vm07.stdout:{"pg_ready":true,"pg_map":{"version":15,"stamp":"2026-03-09T14:32:47.007204+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":192,"num_read_kb":288,"num_write":133,"num_write_kb":1372,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":397840,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":87,"ondisk_log_size":87,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":3,"num_osds":8,"num_per_pool_osds":3,"num_per_pool_omap_osds":3,"kb":167739392,"kb_used":49644,"kb_used_data":4908,"kb_used_omap":0,"kb_used_meta":44672,"kb_avail":167689748,"statfs":{"total":171765137408,"available":171714301952,"internally_reserved":0,"allocated":5025792,"data_stored":2740177,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":45744128},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":
0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"12.002016"},"pg_stats":[{"pgid":"1.0","version":"50'87","reported_seq":56,"reported_epoch":50,"state":"active+clean","last_fresh":"2026-03-09T14:32:23.099860+0000","last_change":"2026-03-09T14:32:11.612038+0000","last_active":"2026-03-09T14:32:23.099860+0000","last_peered":"2026-03-09T14:32:23.099860+0000","last_clean":"2026-03-09T14:32:23.099860+0000","last_became_active":"2026-03-09T14:32:11.303998+0000","last_became_peered":"2026-03-09T14:32:11.303998+0000","last_unstale":"2026-03-09T14:32:23.099860+0000","last_undegraded":"2026-03-09T14:32:23.099860+0000","last_fullsized":"2026-03-09T14:32:23.099860+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":19,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:30:54.631862+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:30:54.631862+0000","last_clean_scrub_stamp":"2026-03-09T14:30:54.631862+0000","objects_scrubbed":0,"log_size":87,"ondisk_log_size":87,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:37:50.497527+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":192,"num_read_kb":288,"num_write":133,"num_write_kb":1372,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":397840,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,6],"acting":[7,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":192,"num_read_kb":288,"num_write":133,"num_write_kb":1372,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":397840,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1204224,"data_stored":1193520,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":87,"ondisk_log_size":87,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":7,"up_from":47,"seq":201863462920,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":6180,"kb_used_data":860,"kb_used_omap":0,"kb_used_meta":5312,"kb_avail":20961244,"statfs":{"total":21470642176,"available":21464313856,"internally_reserved":0,"allocated":880640,"data_stored":591277,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5439488},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.76200000000000001}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.72699999999999998}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.81200000000000006}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.84099999999999997}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.80700000000000005}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.73999999999999999}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.83599999999999997}]}]},{"osd":6,"up_from":41,"seq":176093659147,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":6176,"kb_used_data":856,"kb_used_omap":0,"kb_used_meta":5312,"kb_avail":20961248,"statfs":{"total":21470642176,"available":21464317952,"internally_reserved":0,"allocated":876544,"data_stored":591033,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5439488},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.875}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.749}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.70999999999999996}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.65000000000000002}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.79000000000000004}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.83299999999999996}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.75900000000000001}]}]},{"osd":1,"up_from":13,"seq":55834574874,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":6424,"kb_used_data":464,"kb_used_omap":0,"kb_used_meta":5952,"kb_avail":20961000,"statfs":{"total":21470642176,"available":21464064000,"internally_reserved":0,"allocated":475136,"data_stored":193193,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":6094848},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Mon Mar 9 14:31:39 2026","interfaces":[{"interface":"back","average":{"1min":0.376,"5min":0.376,"15min":0.376},"min":{"1min":0.19,"5min":0.19,"15min":0.19},"max":{"1min":0.71199999999999997,"5min":0.71199999999999997,"15min":0.71199999999999997},"last":1.0489999999999999},{"interface":"front","average":{"1min":0.374,"5min":0.374,"15min":0.374},"min":{"1min":0.215,"5min":0.215,"15min":0.215},"max":{"1min":0.88500000000000001,"5min":0.88500000000000001,"15min":0.88500000000000001},"last":1.0009999999999999}]},{"osd":2,"last update":"Mon Mar 9 14:31:54 2026","interfaces":[{"interface":"back","average":{"1min":0.47399999999999998,"5min":0.47399999999999998,"15min":0.47399999999999998},"min":{"1min":0.25,"5min":0.25,"15min":0.25},"max":{"1min":0.89600000000000002,"5min":0.89600000000000002,"15min":0.89600000000000002},"last":0.98799999999999999},{"interface":"front","average":{"1min":0.40899999999999997,"5min":0.40899999999999997,"15min":0.40899999999999997},"min":{"1min":0.188,"5min":0.188,"15min":0.188},"max":{"1min":0.878,"5min":0.878,"15min":0.878},"last":0.96199999999999997}]},{"osd":3,"last update":"Mon Mar 9 14:32:12 2026","interfaces":[{"interface":"back","average":{"1min":0.56100000000000005,"5min":0.56100000000000005,"15min":0.56100000000000005},"min":{"1min":0.29599999999999999,"5min":0.29599999999999999,"15min":0.29599999999999999},"max":{"1min":0.95299999999999996,"5min":0.95299999999999996,"15min":0.95299999999999996},"last":0.38300000000000001},{"interface":"front","average":{"1min":0.54100000000000004,"5min":0.54100000000000004,"15min":0.54100000000000004},"min":{"1min":0.33200000000000002,"5min":0.33200000000000002,"15min":0.33200000000000002},"max":{"1min":0.92000000000000004,"5min":0.92000000000000004,"15min":0.92000000000000004},"last":1.1200000000000001}]},{"osd":4,"last update":"Mon Mar 9 14:32:25 2026","interfaces":[{"interface":"back","average":{"1min":0.64400000000000002,"5min":0.64400000000000002,"15min":0.64400000000000002},"min":{"1min":0.33900000000000002,"5min":0.33900000000000002,"15min":0.33900000000000002},"max":{"1min":1.101,"5min":1.101,"15min":1.101},"last":1.01},{"interface":"front","average":{"1min":0.86399999999999999,"5min":0.86399999999999999,"15min":0.86399999999999999},"min":{"1min":0.39200000000000002,"5min":0.39200000000000002,"15min":0.39200000000000002},"max":{"1min":4.9809999999999999,"5min":4.9809999999999999,"15min":4.9809999999999999},"last":1.042}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":1.0169999999999999}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":1.079}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.93700000000000006}]}]},{"osd":0,"up_from":8,"seq":34359738397,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":6884,"kb_used_data":860,"kb_used_omap":0,"kb_used_meta":6016,"kb_avail":20960540,"statfs":{"total":21470642176,"available":21463592960,"internally_reserved":0,"allocated":880640,"data_stored":591277,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":6160384},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":1,"last update":"Mon Mar 9 14:31:43 2026","interfaces":[{"interface":"back","average":{"1min":0.46500000000000002,"5min":0.46500000000000002,"15min":0.46500000000000002},"min":{"1min":0.193,"5min":0.193,"15min":0.193},"max":{"1min":1.387,"5min":1.387,"15min":1.387},"last":0.80100000000000005},{"interface":"front","average":{"1min":0.38400000000000001,"5min":0.38400000000000001,"15min":0.38400000000000001},"min":{"1min":0.17699999999999999,"5min":0.17699999999999999,"15min":0.17699999999999999},"max":{"1min":0.96799999999999997,"5min":0.96799999999999997,"15min":0.96799999999999997},"last":0.49399999999999999}]},{"osd":2,"last update":"Mon Mar 9 14:31:54 2026","interfaces":[{"interface":"back","average":{"1min":0.434,"5min":0.434,"15min":0.434},"min":{"1min":0.219,"5min":0.219,"15min":0.219},"max":{"1min":0.754,"5min":0.754,"15min":0.754},"last":0.50800000000000001},{"interface":"front","average":{"1min":0.51900000000000002,"5min":0.51900000000000002,"15min":0.51900000000000002},"min":{"1min":0.224,"5min":0.224,"15min":0.224},"max":{"1min":1.4570000000000001,"5min":1.4570000000000001,"15min":1.4570000000000001},"last":0.52900000000000003}]},{"osd":3,"last update":"Mon Mar 9 14:32:11 2026","interfaces":[{"interface":"back","average":{"1min":0.56200000000000006,"5min":0.56200000000000006,"15min":0.56200000000000006},"min":{"1min":0.32600000000000001,"5min":0.32600000000000001,"15min":0.32600000000000001},"max":{"1min":1.153,"5min":1.153,"15min":1.153},"last":0.66000000000000003},{"interface":"front","average":{"1min":0.61499999999999999,"5min":0.61499999999999999,"15min":0.61499999999999999},"min":{"1min":0.28499999999999998,"5min":0.28499999999999998,"15min":0.28499999999999998},"max":{"1min":0.93300000000000005,"5min":0.93300000000000005,"15min":0.93300000000000005},"last":0.60499999999999998}]},{"osd":4,"last update":"Mon Mar 9 14:32:27 
2026","interfaces":[{"interface":"back","average":{"1min":0.70299999999999996,"5min":0.70299999999999996,"15min":0.70299999999999996},"min":{"1min":0.502,"5min":0.502,"15min":0.502},"max":{"1min":1.4079999999999999,"5min":1.4079999999999999,"15min":1.4079999999999999},"last":0.82399999999999995},{"interface":"front","average":{"1min":0.80500000000000005,"5min":0.80500000000000005,"15min":0.80500000000000005},"min":{"1min":0.47999999999999998,"5min":0.47999999999999998,"15min":0.47999999999999998},"max":{"1min":1.756,"5min":1.756,"15min":1.756},"last":0.59599999999999997}]},{"osd":5,"last update":"Mon Mar 9 14:32:40 2026","interfaces":[{"interface":"back","average":{"1min":0.78900000000000003,"5min":0.78900000000000003,"15min":0.78900000000000003},"min":{"1min":0.42299999999999999,"5min":0.42299999999999999,"15min":0.42299999999999999},"max":{"1min":1.4530000000000001,"5min":1.4530000000000001,"15min":1.4530000000000001},"last":0.78600000000000003},{"interface":"front","average":{"1min":0.77900000000000003,"5min":0.77900000000000003,"15min":0.77900000000000003},"min":{"1min":0.42399999999999999,"5min":0.42399999999999999,"15min":0.42399999999999999},"max":{"1min":1.3600000000000001,"5min":1.3600000000000001,"15min":1.3600000000000001},"last":0.77300000000000002}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.70099999999999996}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.81499999999999995}]}]},{"osd":2,"up_from":18,"seq":77309411351,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":6364,"kb_used_data":468,"kb_used_omap":0,"kb_used_meta":5888,"kb_avail":20961060,"statfs":{"total":21470642176,"available":21464125440,"internally_reserved":0,"allocated":479232,"data_stored":193437,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":6029312},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Mon Mar 9 14:31:54 2026","interfaces":[{"interface":"back","average":{"1min":0.39300000000000002,"5min":0.39300000000000002,"15min":0.39300000000000002},"min":{"1min":0.158,"5min":0.158,"15min":0.158},"max":{"1min":0.73599999999999999,"5min":0.73599999999999999,"15min":0.73599999999999999},"last":0.48499999999999999},{"interface":"front","average":{"1min":0.50800000000000001,"5min":0.50800000000000001,"15min":0.50800000000000001},"min":{"1min":0.27700000000000002,"5min":0.27700000000000002,"15min":0.27700000000000002},"max":{"1min":1.042,"5min":1.042,"15min":1.042},"last":0.51200000000000001}]},{"osd":1,"last update":"Mon Mar 9 14:31:54 
2026","interfaces":[{"interface":"back","average":{"1min":0.5,"5min":0.5,"15min":0.5},"min":{"1min":0.29699999999999999,"5min":0.29699999999999999,"15min":0.29699999999999999},"max":{"1min":1.0629999999999999,"5min":1.0629999999999999,"15min":1.0629999999999999},"last":0.441},{"interface":"front","average":{"1min":0.48699999999999999,"5min":0.48699999999999999,"15min":0.48699999999999999},"min":{"1min":0.315,"5min":0.315,"15min":0.315},"max":{"1min":0.89100000000000001,"5min":0.89100000000000001,"15min":0.89100000000000001},"last":0.49299999999999999}]},{"osd":3,"last update":"Mon Mar 9 14:32:10 2026","interfaces":[{"interface":"back","average":{"1min":0.60699999999999998,"5min":0.60699999999999998,"15min":0.60699999999999998},"min":{"1min":0.30599999999999999,"5min":0.30599999999999999,"15min":0.30599999999999999},"max":{"1min":1.244,"5min":1.244,"15min":1.244},"last":0.68000000000000005},{"interface":"front","average":{"1min":0.54100000000000004,"5min":0.54100000000000004,"15min":0.54100000000000004},"min":{"1min":0.311,"5min":0.311,"15min":0.311},"max":{"1min":0.755,"5min":0.755,"15min":0.755},"last":0.625}]},{"osd":4,"last update":"Mon Mar 9 14:32:26 2026","interfaces":[{"interface":"back","average":{"1min":0.69499999999999995,"5min":0.69499999999999995,"15min":0.69499999999999995},"min":{"1min":0.32400000000000001,"5min":0.32400000000000001,"15min":0.32400000000000001},"max":{"1min":1.169,"5min":1.169,"15min":1.169},"last":0.66200000000000003},{"interface":"front","average":{"1min":0.71699999999999997,"5min":0.71699999999999997,"15min":0.71699999999999997},"min":{"1min":0.38700000000000001,"5min":0.38700000000000001,"15min":0.38700000000000001},"max":{"1min":1.2430000000000001,"5min":1.2430000000000001,"15min":1.2430000000000001},"last":0.505}]},{"osd":5,"last update":"Mon Mar 9 14:32:41 2026","interfaces":[{"interface":"back","average":{"1min":0.72499999999999998,"5min":0.72499999999999998,"15min":0.72499999999999998},"min":{"1min":0.40200000000000002,"5min":0.40200000000000002,"15min":0.40200000000000002},"max":{"1min":1.2609999999999999,"5min":1.2609999999999999,"15min":1.2609999999999999},"last":0.52000000000000002},{"interface":"front","average":{"1min":0.70799999999999996,"5min":0.70799999999999996,"15min":0.70799999999999996},"min":{"1min":0.46000000000000002,"5min":0.46000000000000002,"15min":0.46000000000000002},"max":{"1min":1.2090000000000001,"5min":1.2090000000000001,"15min":1.2090000000000001},"last":0.46000000000000002}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.47499999999999998}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.42799999999999999}]}]},{"osd":3,"up_from":25,"seq":107374182421,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":5916,"kb_used_data":468,"kb_used_omap":0,"kb_used_meta":5440,"kb_avail":20961508,"statfs":{"total":21470642176,"available":21464584192,"internally_reserved":0,"allocated":479232,"data_stored":193437,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5570560},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Mon Mar 9 14:32:12 2026","interfaces":[{"interface":"back","average":{"1min":0.48199999999999998,"5min":0.48199999999999998,"15min":0.48199999999999998},"min":{"1min":0.27900000000000003,"5min":0.27900000000000003,"15min":0.27900000000000003},"max":{"1min":0.77000000000000002,"5min":0.77000000000000002,"15min":0.77000000000000002},"last":0.49199999999999999},{"interface":"front","average":{"1min":0.502,"5min":0.502,"15min":0.502},"min":{"1min":0.33300000000000002,"5min":0.33300000000000002,"15min":0.33300000000000002},"max":{"1min":1.0649999999999999,"5min":1.0649999999999999,"15min":1.0649999999999999},"last":0.68300000000000005}]},{"osd":1,"last update":"Mon Mar 9 14:32:12 2026","interfaces":[{"interface":"back","average":{"1min":0.60399999999999998,"5min":0.60399999999999998,"15min":0.60399999999999998},"min":{"1min":0.33500000000000002,"5min":0.33500000000000002,"15min":0.33500000000000002},"max":{"1min":0.94999999999999996,"5min":0.94999999999999996,"15min":0.94999999999999996},"last":0.45900000000000002},{"interface":"front","average":{"1min":0.60699999999999998,"5min":0.60699999999999998,"15min":0.60699999999999998},"min":{"1min":0.315,"5min":0.315,"15min":0.315},"max":{"1min":0.79500000000000004,"5min":0.79500000000000004,"15min":0.79500000000000004},"last":0.5}]},{"osd":2,"last update":"Mon Mar 9 14:32:12 2026","interfaces":[{"interface":"back","average":{"1min":0.55300000000000005,"5min":0.55300000000000005,"15min":0.55300000000000005},"min":{"1min":0.29599999999999999,"5min":0.29599999999999999,"15min":0.29599999999999999},"max":{"1min":0.80900000000000005,"5min":0.80900000000000005,"15min":0.80900000000000005},"last":0.629},{"interface":"front","average":{"1min":0.51400000000000001,"5min":0.51400000000000001,"15min":0.51400000000000001},"min":{"1min":0.32800000000000001,"5min":0.32800000000000001,"15min":0.32800000000000001},"max":{"1min":0.71799999999999997,"5min":0.71799999999999997,"15min":0.71799999999999997},"last":0.79400000000000004}]},{"osd":4,"last update":"Mon Mar 9 14:32:26 
2026","interfaces":[{"interface":"back","average":{"1min":0.70299999999999996,"5min":0.70299999999999996,"15min":0.70299999999999996},"min":{"1min":0.437,"5min":0.437,"15min":0.437},"max":{"1min":1.032,"5min":1.032,"15min":1.032},"last":0.435},{"interface":"front","average":{"1min":0.71799999999999997,"5min":0.71799999999999997,"15min":0.71799999999999997},"min":{"1min":0.45100000000000001,"5min":0.45100000000000001,"15min":0.45100000000000001},"max":{"1min":1.1619999999999999,"5min":1.1619999999999999,"15min":1.1619999999999999},"last":0.61899999999999999}]},{"osd":5,"last update":"Mon Mar 9 14:32:41 2026","interfaces":[{"interface":"back","average":{"1min":0.73399999999999999,"5min":0.73399999999999999,"15min":0.73399999999999999},"min":{"1min":0.46200000000000002,"5min":0.46200000000000002,"15min":0.46200000000000002},"max":{"1min":1.629,"5min":1.629,"15min":1.629},"last":0.72399999999999998},{"interface":"front","average":{"1min":0.73299999999999998,"5min":0.73299999999999998,"15min":0.73299999999999998},"min":{"1min":0.39500000000000002,"5min":0.39500000000000002,"15min":0.39500000000000002},"max":{"1min":1.595,"5min":1.595,"15min":1.595},"last":0.44900000000000001}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.63900000000000001}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.76900000000000002}]}]},{"osd":4,"up_from":30,"seq":128849018897,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":5852,"kb_used_data":468,"kb_used_omap":0,"kb_used_meta":5376,"kb_avail":20961572,"statfs":{"total":21470642176,"available":21464649728,"internally_reserved":0,"allocated":479232,"data_stored":193437,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5505024},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Mon Mar 9 14:32:28 2026","interfaces":[{"interface":"back","average":{"1min":0.64900000000000002,"5min":0.64900000000000002,"15min":0.64900000000000002},"min":{"1min":0.32000000000000001,"5min":0.32000000000000001,"15min":0.32000000000000001},"max":{"1min":1.2849999999999999,"5min":1.2849999999999999,"15min":1.2849999999999999},"last":0.41099999999999998},{"interface":"front","average":{"1min":0.626,"5min":0.626,"15min":0.626},"min":{"1min":0.42199999999999999,"5min":0.42199999999999999,"15min":0.42199999999999999},"max":{"1min":0.98699999999999999,"5min":0.98699999999999999,"15min":0.98699999999999999},"last":0.68899999999999995}]},{"osd":1,"last update":"Mon Mar 9 14:32:28 
2026","interfaces":[{"interface":"back","average":{"1min":0.61299999999999999,"5min":0.61299999999999999,"15min":0.61299999999999999},"min":{"1min":0.30599999999999999,"5min":0.30599999999999999,"15min":0.30599999999999999},"max":{"1min":0.82099999999999995,"5min":0.82099999999999995,"15min":0.82099999999999995},"last":0.755},{"interface":"front","average":{"1min":0.71999999999999997,"5min":0.71999999999999997,"15min":0.71999999999999997},"min":{"1min":0.33700000000000002,"5min":0.33700000000000002,"15min":0.33700000000000002},"max":{"1min":1.5169999999999999,"5min":1.5169999999999999,"15min":1.5169999999999999},"last":0.41899999999999998}]},{"osd":2,"last update":"Mon Mar 9 14:32:28 2026","interfaces":[{"interface":"back","average":{"1min":0.74199999999999999,"5min":0.74199999999999999,"15min":0.74199999999999999},"min":{"1min":0.33000000000000002,"5min":0.33000000000000002,"15min":0.33000000000000002},"max":{"1min":1.5049999999999999,"5min":1.5049999999999999,"15min":1.5049999999999999},"last":0.502},{"interface":"front","average":{"1min":0.64800000000000002,"5min":0.64800000000000002,"15min":0.64800000000000002},"min":{"1min":0.41799999999999998,"5min":0.41799999999999998,"15min":0.41799999999999998},"max":{"1min":1.085,"5min":1.085,"15min":1.085},"last":0.69899999999999995}]},{"osd":3,"last update":"Mon Mar 9 14:32:28 2026","interfaces":[{"interface":"back","average":{"1min":0.63,"5min":0.63,"15min":0.63},"min":{"1min":0.39800000000000002,"5min":0.39800000000000002,"15min":0.39800000000000002},"max":{"1min":0.98599999999999999,"5min":0.98599999999999999,"15min":0.98599999999999999},"last":0.74099999999999999},{"interface":"front","average":{"1min":0.60799999999999998,"5min":0.60799999999999998,"15min":0.60799999999999998},"min":{"1min":0.32300000000000001,"5min":0.32300000000000001,"15min":0.32300000000000001},"max":{"1min":0.94799999999999995,"5min":0.94799999999999995,"15min":0.94799999999999995},"last":0.77100000000000002}]},{"osd":5,"last update":"Mon Mar 9 14:32:41 2026","interfaces":[{"interface":"back","average":{"1min":0.59699999999999998,"5min":0.59699999999999998,"15min":0.59699999999999998},"min":{"1min":0.30099999999999999,"5min":0.30099999999999999,"15min":0.30099999999999999},"max":{"1min":1.002,"5min":1.002,"15min":1.002},"last":0.73199999999999998},{"interface":"front","average":{"1min":0.57899999999999996,"5min":0.57899999999999996,"15min":0.57899999999999996},"min":{"1min":0.36899999999999999,"5min":0.36899999999999999,"15min":0.36899999999999999},"max":{"1min":0.97899999999999998,"5min":0.97899999999999998,"15min":0.97899999999999998},"last":0.78100000000000003}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.39700000000000002}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.76400000000000001}]}]},{"osd":5,"up_from":36,"seq":154618822670,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":5848,"kb_used_data":464,"kb_used_omap":0,"kb_used_meta":5376,"kb_avail":20961576,"statfs":{"total":21470642176,"available":21464653824,"internally_reserved":0,"allocated":475136,"data_stored":193086,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5505024},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Mon Mar 9 14:32:44 2026","interfaces":[{"interface":"back","average":{"1min":0.628,"5min":0.628,"15min":0.628},"min":{"1min":0.30499999999999999,"5min":0.30499999999999999,"15min":0.30499999999999999},"max":{"1min":1.2190000000000001,"5min":1.2190000000000001,"15min":1.2190000000000001},"last":1.2190000000000001},{"interface":"front","average":{"1min":0.63,"5min":0.63,"15min":0.63},"min":{"1min":0.28299999999999997,"5min":0.28299999999999997,"15min":0.28299999999999997},"max":{"1min":1.0049999999999999,"5min":1.0049999999999999,"15min":1.0049999999999999},"last":0.64100000000000001}]},{"osd":1,"last update":"Mon Mar 9 14:32:44 2026","interfaces":[{"interface":"back","average":{"1min":0.69999999999999996,"5min":0.69999999999999996,"15min":0.69999999999999996},"min":{"1min":0.50900000000000001,"5min":0.50900000000000001,"15min":0.50900000000000001},"max":{"1min":1.115,"5min":1.115,"15min":1.115},"last":0.59699999999999998},{"interface":"front","average":{"1min":0.69999999999999996,"5min":0.69999999999999996,"15min":0.69999999999999996},"min":{"1min":0.495,"5min":0.495,"15min":0.495},"max":{"1min":1.0089999999999999,"5min":1.0089999999999999,"15min":1.0089999999999999},"last":0.56899999999999995}]},{"osd":2,"last update":"Mon Mar 9 14:32:44 2026","interfaces":[{"interface":"back","average":{"1min":0.61799999999999999,"5min":0.61799999999999999,"15min":0.61799999999999999},"min":{"1min":0.32800000000000001,"5min":0.32800000000000001,"15min":0.32800000000000001},"max":{"1min":1.0169999999999999,"5min":1.0169999999999999,"15min":1.0169999999999999},"last":0.65600000000000003},{"interface":"front","average":{"1min":0.73899999999999999,"5min":0.73899999999999999,"15min":0.73899999999999999},"min":{"1min":0.40500000000000003,"5min":0.40500000000000003,"15min":0.40500000000000003},"max":{"1min":1.2849999999999999,"5min":1.2849999999999999,"15min":1.2849999999999999},"last":1.2849999999999999}]},{"osd":3,"last update":"Mon Mar 9 14:32:44 2026","interfaces":[{"interface":"back","average":{"1min":0.75600000000000001,"5min":0.75600000000000001,"15min":0.75600000000000001},"min":{"1min":0.42399999999999999,"5min":0.42399999999999999,"15min":0.42399999999999999},"max":{"1min":1.4059999999999999,"5min":1.4059999999999999,"15min":1.4059999999999999},"last":1.405},{"interface":"front","average":{"1min":0.752,"5min":0.752,"15min":0.752},"min":{"1min":0.35799999999999998,"5min":0.35799999999999998,"15min":0.35799999999999998},"max":{"1min":1.1220000000000001,"5min":1.1220000000000001,"15min":1.1220000000000001},"last":0.72699999999999998}]},{"osd":4,"last update":"Mon Mar 
9 14:32:44 2026","interfaces":[{"interface":"back","average":{"1min":0.58899999999999997,"5min":0.58899999999999997,"15min":0.58899999999999997},"min":{"1min":0.311,"5min":0.311,"15min":0.311},"max":{"1min":0.98199999999999998,"5min":0.98199999999999998,"15min":0.98199999999999998},"last":0.501},{"interface":"front","average":{"1min":0.56299999999999994,"5min":0.56299999999999994,"15min":0.56299999999999994},"min":{"1min":0.215,"5min":0.215,"15min":0.215},"max":{"1min":1.054,"5min":1.054,"15min":1.054},"last":0.623}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.51400000000000001}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.55900000000000005}]}]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":401408,"data_stored":397840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":401408,"data_stored":397840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":401408,"data_stored":397840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-09T14:32:47.113 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph pg dump --format=json 2026-03-09T14:32:48.664 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:32:48 vm07 bash[38490]: level=info ts=2026-03-09T14:32:48.233Z caller=cluster.go:688 component=cluster msg="gossip settled; proceeding" elapsed=10.006942844s 2026-03-09T14:32:48.735 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/mon.c/config 2026-03-09T14:32:49.164 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:48 vm07 bash[22585]: cluster 2026-03-09T14:32:47.007303+0000 mgr.y (mgr.24310) 32 : cluster [DBG] pgmap v15: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:49.164 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:48 vm07 bash[22585]: audit 2026-03-09T14:32:47.060354+0000 mgr.y (mgr.24310) 33 : audit [DBG] from='client.24436 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T14:32:49.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:48 vm07 bash[17480]: cluster 2026-03-09T14:32:47.007303+0000 mgr.y (mgr.24310) 32 : cluster [DBG] pgmap v15: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:49.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:48 vm07 bash[17480]: audit 2026-03-09T14:32:47.060354+0000 mgr.y (mgr.24310) 33 : audit [DBG] from='client.24436 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T14:32:49.170 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T14:32:49.173 INFO:teuthology.orchestra.run.vm07.stderr:dumped all 
2026-03-09T14:32:49.227 INFO:teuthology.orchestra.run.vm07.stdout:{"pg_ready":true,"pg_map":{"version":16,"stamp":"2026-03-09T14:32:49.007423+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":192,"num_read_kb":288,"num_write":133,"num_write_kb":1372,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":397840,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":87,"ondisk_log_size":87,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":3,"num_osds":8,"num_per_pool_osds":3,"num_per_pool_omap_osds":3,"kb":167739392,"kb_used":49644,"kb_used_data":4908,"kb_used_omap":0,"kb_used_meta":44672,"kb_avail":167689748,"statfs":{"total":171765137408,"available":171714301952,"internally_reserved":0,"allocated":5025792,"data_stored":2740177,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":45744128},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"12.001767"},"pg_stats":[{"pgid":"1.0","version":"50'87","reported_seq":56,"reported_epoch":50,"state":"active+clean","last_fresh":"2026-03-09T14:32:23.099860+0000","last_change":"2026-03-09T14:32:11.612038+00
00","last_active":"2026-03-09T14:32:23.099860+0000","last_peered":"2026-03-09T14:32:23.099860+0000","last_clean":"2026-03-09T14:32:23.099860+0000","last_became_active":"2026-03-09T14:32:11.303998+0000","last_became_peered":"2026-03-09T14:32:11.303998+0000","last_unstale":"2026-03-09T14:32:23.099860+0000","last_undegraded":"2026-03-09T14:32:23.099860+0000","last_fullsized":"2026-03-09T14:32:23.099860+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":19,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:30:54.631862+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:30:54.631862+0000","last_clean_scrub_stamp":"2026-03-09T14:30:54.631862+0000","objects_scrubbed":0,"log_size":87,"ondisk_log_size":87,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:37:50.497527+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":192,"num_read_kb":288,"num_write":133,"num_write_kb":1372,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":397840,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,6],"acting":[7,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":192,"num_read_kb":288,"num_write":133,"num_write_kb":1372,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":397840,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1204224,"data_stored":1193520,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":87,"ondisk_log_size":87,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":7,"up_from":47,"seq":201863462920,"num_pgs":
1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":6180,"kb_used_data":860,"kb_used_omap":0,"kb_used_meta":5312,"kb_avail":20961244,"statfs":{"total":21470642176,"available":21464313856,"internally_reserved":0,"allocated":880640,"data_stored":591277,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5439488},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.76200000000000001}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.72699999999999998}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.81200000000000006}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.84099999999999997}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.80700000000000005}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.73999999999999999}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.83599999999999997}]}]},{"osd":6,"up_from":41,"seq":176093659147,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":6176,"kb_used_data":856,"kb_used_omap":0,"kb_used_meta":5312,"kb_avail":20961248,"statfs":{"total":21470642176,"available":21464317952,"internally_reserved":0,"allocated":876544,"data_stored":591033,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5439488},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.875}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.749}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.70999999999999996}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.65000000000000002}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.79000000000000004}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.83299999999999996}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.75900000000000001}]}]},{"osd":1,"up_from":13,"seq":55834574874,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":6424,"kb_used_data":464,"kb_used_omap":0,"kb_used_meta":5952,"kb_avail":20961000,"statfs":{"total":21470642176,"available":21464064000,"internally_reserved":0,"allocated":475136,"data_stored":193193,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":6094848},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Mon Mar 9 14:31:39 2026","interfaces":[{"interface":"back","average":{"1min":0.376,"5min":0.376,"15min":0.376},"min":{"1min":0.19,"5min":0.19,"15min":0.19},"max":{"1min":0.71199999999999997,"5min":0.71199999999999997,"15min":0.71199999999999997},"last":1.0489999999999999},{"interface":"front","average":{"1min":0.374,"5min":0.374,"15min":0.374},"min":{"1min":0.215,"5min":0.215,"15min":0.215},"max":{"1min":0.88500000000000001,"5min":0.88500000000000001,"15min":0.88500000000000001},"last":1.0009999999999999}]},{"osd":2,"last update":"Mon Mar 9 14:31:54 2026","interfaces":[{"interface":"back","average":{"1min":0.47399999999999998,"5min":0.47399999999999998,"15min":0.47399999999999998},"min":{"1min":0.25,"5min":0.25,"15min":0.25},"max":{"1min":0.89600000000000002,"5min":0.89600000000000002,"15min":0.89600000000000002},"last":0.98799999999999999},{"interface":"front","average":{"1min":0.40899999999999997,"5min":0.40899999999999997,"15min":0.40899999999999997},"min":{"1min":0.188,"5min":0.188,"15min":0.188},"max":{"1min":0.878,"5min":0.878,"15min":0.878},"last":0.96199999999999997}]},{"osd":3,"last update":"Mon Mar 9 14:32:12 
2026","interfaces":[{"interface":"back","average":{"1min":0.56100000000000005,"5min":0.56100000000000005,"15min":0.56100000000000005},"min":{"1min":0.29599999999999999,"5min":0.29599999999999999,"15min":0.29599999999999999},"max":{"1min":0.95299999999999996,"5min":0.95299999999999996,"15min":0.95299999999999996},"last":0.38300000000000001},{"interface":"front","average":{"1min":0.54100000000000004,"5min":0.54100000000000004,"15min":0.54100000000000004},"min":{"1min":0.33200000000000002,"5min":0.33200000000000002,"15min":0.33200000000000002},"max":{"1min":0.92000000000000004,"5min":0.92000000000000004,"15min":0.92000000000000004},"last":1.1200000000000001}]},{"osd":4,"last update":"Mon Mar 9 14:32:25 2026","interfaces":[{"interface":"back","average":{"1min":0.64400000000000002,"5min":0.64400000000000002,"15min":0.64400000000000002},"min":{"1min":0.33900000000000002,"5min":0.33900000000000002,"15min":0.33900000000000002},"max":{"1min":1.101,"5min":1.101,"15min":1.101},"last":1.01},{"interface":"front","average":{"1min":0.86399999999999999,"5min":0.86399999999999999,"15min":0.86399999999999999},"min":{"1min":0.39200000000000002,"5min":0.39200000000000002,"15min":0.39200000000000002},"max":{"1min":4.9809999999999999,"5min":4.9809999999999999,"15min":4.9809999999999999},"last":1.042}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":1.0169999999999999}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":1.079}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.93700000000000006}]}]},{"osd":0,"up_from":8,"seq":34359738397,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":6884,"kb_used_data":860,"kb_used_omap":0,"kb_used_meta":6016,"kb_avail":20960540,"statfs":{"total":21470642176,"available":21463592960,"internally_reserved":0,"allocated":880640,"data_stored":591277,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":6160384},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":1,"last update":"Mon Mar 9 14:31:43 2026","interfaces":[{"interface":"back","average":{"1min":0.46500000000000002,"5min":0.46500000000000002,"15min":0.46500000000000002},"min":{"1min":0.193,"5min":0.193,"15min":0.193},"max":{"1min":1.387,"5min":1.387,"15min":1.387},"last":0.80100000000000005},{"interface":"front","average":{"1min":0.38400000000000001,"5min":0.38400000000000001,"15min":0.38400000000000001},"min":{"1min":0.17699999999999999,"5min":0.17699999999999999,"15min":0.17699999999999999},"max":{"1min":0.96799999999999997,"5min":0.96799999999999997,"15min":0.96799999999999997},"last":0.49399999999999999}]},{"osd":2,"last update":"Mon Mar 9 14:31:54 
2026","interfaces":[{"interface":"back","average":{"1min":0.434,"5min":0.434,"15min":0.434},"min":{"1min":0.219,"5min":0.219,"15min":0.219},"max":{"1min":0.754,"5min":0.754,"15min":0.754},"last":0.50800000000000001},{"interface":"front","average":{"1min":0.51900000000000002,"5min":0.51900000000000002,"15min":0.51900000000000002},"min":{"1min":0.224,"5min":0.224,"15min":0.224},"max":{"1min":1.4570000000000001,"5min":1.4570000000000001,"15min":1.4570000000000001},"last":0.52900000000000003}]},{"osd":3,"last update":"Mon Mar 9 14:32:11 2026","interfaces":[{"interface":"back","average":{"1min":0.56200000000000006,"5min":0.56200000000000006,"15min":0.56200000000000006},"min":{"1min":0.32600000000000001,"5min":0.32600000000000001,"15min":0.32600000000000001},"max":{"1min":1.153,"5min":1.153,"15min":1.153},"last":0.66000000000000003},{"interface":"front","average":{"1min":0.61499999999999999,"5min":0.61499999999999999,"15min":0.61499999999999999},"min":{"1min":0.28499999999999998,"5min":0.28499999999999998,"15min":0.28499999999999998},"max":{"1min":0.93300000000000005,"5min":0.93300000000000005,"15min":0.93300000000000005},"last":0.60499999999999998}]},{"osd":4,"last update":"Mon Mar 9 14:32:27 2026","interfaces":[{"interface":"back","average":{"1min":0.70299999999999996,"5min":0.70299999999999996,"15min":0.70299999999999996},"min":{"1min":0.502,"5min":0.502,"15min":0.502},"max":{"1min":1.4079999999999999,"5min":1.4079999999999999,"15min":1.4079999999999999},"last":0.82399999999999995},{"interface":"front","average":{"1min":0.80500000000000005,"5min":0.80500000000000005,"15min":0.80500000000000005},"min":{"1min":0.47999999999999998,"5min":0.47999999999999998,"15min":0.47999999999999998},"max":{"1min":1.756,"5min":1.756,"15min":1.756},"last":0.59599999999999997}]},{"osd":5,"last update":"Mon Mar 9 14:32:40 2026","interfaces":[{"interface":"back","average":{"1min":0.78900000000000003,"5min":0.78900000000000003,"15min":0.78900000000000003},"min":{"1min":0.42299999999999999,"5min":0.42299999999999999,"15min":0.42299999999999999},"max":{"1min":1.4530000000000001,"5min":1.4530000000000001,"15min":1.4530000000000001},"last":0.78600000000000003},{"interface":"front","average":{"1min":0.77900000000000003,"5min":0.77900000000000003,"15min":0.77900000000000003},"min":{"1min":0.42399999999999999,"5min":0.42399999999999999,"15min":0.42399999999999999},"max":{"1min":1.3600000000000001,"5min":1.3600000000000001,"15min":1.3600000000000001},"last":0.77300000000000002}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.70099999999999996}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.81499999999999995}]}]},{"osd":2,"up_from":18,"seq":77309411351,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":6364,"kb_used_data":468,"kb_used_omap":0,"kb_used_meta":5888,"kb_avail":20961060,"statfs":{"total":21470642176,"available":21464125440,"internally_reserved":0,"allocated":479232,"data_stored":193437,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":6029312},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Mon Mar 9 14:31:54 2026","interfaces":[{"interface":"back","average":{"1min":0.39300000000000002,"5min":0.39300000000000002,"15min":0.39300000000000002},"min":{"1min":0.158,"5min":0.158,"15min":0.158},"max":{"1min":0.73599999999999999,"5min":0.73599999999999999,"15min":0.73599999999999999},"last":0.48499999999999999},{"interface":"front","average":{"1min":0.50800000000000001,"5min":0.50800000000000001,"15min":0.50800000000000001},"min":{"1min":0.27700000000000002,"5min":0.27700000000000002,"15min":0.27700000000000002},"max":{"1min":1.042,"5min":1.042,"15min":1.042},"last":0.51200000000000001}]},{"osd":1,"last update":"Mon Mar 9 14:31:54 2026","interfaces":[{"interface":"back","average":{"1min":0.5,"5min":0.5,"15min":0.5},"min":{"1min":0.29699999999999999,"5min":0.29699999999999999,"15min":0.29699999999999999},"max":{"1min":1.0629999999999999,"5min":1.0629999999999999,"15min":1.0629999999999999},"last":0.441},{"interface":"front","average":{"1min":0.48699999999999999,"5min":0.48699999999999999,"15min":0.48699999999999999},"min":{"1min":0.315,"5min":0.315,"15min":0.315},"max":{"1min":0.89100000000000001,"5min":0.89100000000000001,"15min":0.89100000000000001},"last":0.49299999999999999}]},{"osd":3,"last update":"Mon Mar 9 14:32:10 2026","interfaces":[{"interface":"back","average":{"1min":0.60699999999999998,"5min":0.60699999999999998,"15min":0.60699999999999998},"min":{"1min":0.30599999999999999,"5min":0.30599999999999999,"15min":0.30599999999999999},"max":{"1min":1.244,"5min":1.244,"15min":1.244},"last":0.68000000000000005},{"interface":"front","average":{"1min":0.54100000000000004,"5min":0.54100000000000004,"15min":0.54100000000000004},"min":{"1min":0.311,"5min":0.311,"15min":0.311},"max":{"1min":0.755,"5min":0.755,"15min":0.755},"last":0.625}]},{"osd":4,"last update":"Mon Mar 9 14:32:26 2026","interfaces":[{"interface":"back","average":{"1min":0.69499999999999995,"5min":0.69499999999999995,"15min":0.69499999999999995},"min":{"1min":0.32400000000000001,"5min":0.32400000000000001,"15min":0.32400000000000001},"max":{"1min":1.169,"5min":1.169,"15min":1.169},"last":0.66200000000000003},{"interface":"front","average":{"1min":0.71699999999999997,"5min":0.71699999999999997,"15min":0.71699999999999997},"min":{"1min":0.38700000000000001,"5min":0.38700000000000001,"15min":0.38700000000000001},"max":{"1min":1.2430000000000001,"5min":1.2430000000000001,"15min":1.2430000000000001},"last":0.505}]},{"osd":5,"last update":"Mon Mar 9 14:32:41 
2026","interfaces":[{"interface":"back","average":{"1min":0.72499999999999998,"5min":0.72499999999999998,"15min":0.72499999999999998},"min":{"1min":0.40200000000000002,"5min":0.40200000000000002,"15min":0.40200000000000002},"max":{"1min":1.2609999999999999,"5min":1.2609999999999999,"15min":1.2609999999999999},"last":0.52000000000000002},{"interface":"front","average":{"1min":0.70799999999999996,"5min":0.70799999999999996,"15min":0.70799999999999996},"min":{"1min":0.46000000000000002,"5min":0.46000000000000002,"15min":0.46000000000000002},"max":{"1min":1.2090000000000001,"5min":1.2090000000000001,"15min":1.2090000000000001},"last":0.46000000000000002}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.47499999999999998}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.42799999999999999}]}]},{"osd":3,"up_from":25,"seq":107374182421,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":5916,"kb_used_data":468,"kb_used_omap":0,"kb_used_meta":5440,"kb_avail":20961508,"statfs":{"total":21470642176,"available":21464584192,"internally_reserved":0,"allocated":479232,"data_stored":193437,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5570560},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Mon Mar 9 14:32:12 2026","interfaces":[{"interface":"back","average":{"1min":0.48199999999999998,"5min":0.48199999999999998,"15min":0.48199999999999998},"min":{"1min":0.27900000000000003,"5min":0.27900000000000003,"15min":0.27900000000000003},"max":{"1min":0.77000000000000002,"5min":0.77000000000000002,"15min":0.77000000000000002},"last":0.49199999999999999},{"interface":"front","average":{"1min":0.502,"5min":0.502,"15min":0.502},"min":{"1min":0.33300000000000002,"5min":0.33300000000000002,"15min":0.33300000000000002},"max":{"1min":1.0649999999999999,"5min":1.0649999999999999,"15min":1.0649999999999999},"last":0.68300000000000005}]},{"osd":1,"last update":"Mon Mar 9 14:32:12 2026","interfaces":[{"interface":"back","average":{"1min":0.60399999999999998,"5min":0.60399999999999998,"15min":0.60399999999999998},"min":{"1min":0.33500000000000002,"5min":0.33500000000000002,"15min":0.33500000000000002},"max":{"1min":0.94999999999999996,"5min":0.94999999999999996,"15min":0.94999999999999996},"last":0.45900000000000002},{"interface":"front","average":{"1min":0.60699999999999998,"5min":0.60699999999999998,"15min":0.60699999999999998},"min":{"1min":0.315,"5min":0.315,"15min":0.315},"max":{"1min":0.79500000000000004,"5min":0.79500000000000004,"15min":0.79500000000000004},"last":0.5}]},{"osd":2,"last update":"Mon Mar 9 14:32:12 
2026","interfaces":[{"interface":"back","average":{"1min":0.55300000000000005,"5min":0.55300000000000005,"15min":0.55300000000000005},"min":{"1min":0.29599999999999999,"5min":0.29599999999999999,"15min":0.29599999999999999},"max":{"1min":0.80900000000000005,"5min":0.80900000000000005,"15min":0.80900000000000005},"last":0.629},{"interface":"front","average":{"1min":0.51400000000000001,"5min":0.51400000000000001,"15min":0.51400000000000001},"min":{"1min":0.32800000000000001,"5min":0.32800000000000001,"15min":0.32800000000000001},"max":{"1min":0.71799999999999997,"5min":0.71799999999999997,"15min":0.71799999999999997},"last":0.79400000000000004}]},{"osd":4,"last update":"Mon Mar 9 14:32:26 2026","interfaces":[{"interface":"back","average":{"1min":0.70299999999999996,"5min":0.70299999999999996,"15min":0.70299999999999996},"min":{"1min":0.437,"5min":0.437,"15min":0.437},"max":{"1min":1.032,"5min":1.032,"15min":1.032},"last":0.435},{"interface":"front","average":{"1min":0.71799999999999997,"5min":0.71799999999999997,"15min":0.71799999999999997},"min":{"1min":0.45100000000000001,"5min":0.45100000000000001,"15min":0.45100000000000001},"max":{"1min":1.1619999999999999,"5min":1.1619999999999999,"15min":1.1619999999999999},"last":0.61899999999999999}]},{"osd":5,"last update":"Mon Mar 9 14:32:41 2026","interfaces":[{"interface":"back","average":{"1min":0.73399999999999999,"5min":0.73399999999999999,"15min":0.73399999999999999},"min":{"1min":0.46200000000000002,"5min":0.46200000000000002,"15min":0.46200000000000002},"max":{"1min":1.629,"5min":1.629,"15min":1.629},"last":0.72399999999999998},{"interface":"front","average":{"1min":0.73299999999999998,"5min":0.73299999999999998,"15min":0.73299999999999998},"min":{"1min":0.39500000000000002,"5min":0.39500000000000002,"15min":0.39500000000000002},"max":{"1min":1.595,"5min":1.595,"15min":1.595},"last":0.44900000000000001}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.63900000000000001}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.76900000000000002}]}]},{"osd":4,"up_from":30,"seq":128849018897,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":5852,"kb_used_data":468,"kb_used_omap":0,"kb_used_meta":5376,"kb_avail":20961572,"statfs":{"total":21470642176,"available":21464649728,"internally_reserved":0,"allocated":479232,"data_stored":193437,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5505024},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Mon Mar 9 14:32:28 
2026","interfaces":[{"interface":"back","average":{"1min":0.64900000000000002,"5min":0.64900000000000002,"15min":0.64900000000000002},"min":{"1min":0.32000000000000001,"5min":0.32000000000000001,"15min":0.32000000000000001},"max":{"1min":1.2849999999999999,"5min":1.2849999999999999,"15min":1.2849999999999999},"last":0.41099999999999998},{"interface":"front","average":{"1min":0.626,"5min":0.626,"15min":0.626},"min":{"1min":0.42199999999999999,"5min":0.42199999999999999,"15min":0.42199999999999999},"max":{"1min":0.98699999999999999,"5min":0.98699999999999999,"15min":0.98699999999999999},"last":0.68899999999999995}]},{"osd":1,"last update":"Mon Mar 9 14:32:28 2026","interfaces":[{"interface":"back","average":{"1min":0.61299999999999999,"5min":0.61299999999999999,"15min":0.61299999999999999},"min":{"1min":0.30599999999999999,"5min":0.30599999999999999,"15min":0.30599999999999999},"max":{"1min":0.82099999999999995,"5min":0.82099999999999995,"15min":0.82099999999999995},"last":0.755},{"interface":"front","average":{"1min":0.71999999999999997,"5min":0.71999999999999997,"15min":0.71999999999999997},"min":{"1min":0.33700000000000002,"5min":0.33700000000000002,"15min":0.33700000000000002},"max":{"1min":1.5169999999999999,"5min":1.5169999999999999,"15min":1.5169999999999999},"last":0.41899999999999998}]},{"osd":2,"last update":"Mon Mar 9 14:32:28 2026","interfaces":[{"interface":"back","average":{"1min":0.74199999999999999,"5min":0.74199999999999999,"15min":0.74199999999999999},"min":{"1min":0.33000000000000002,"5min":0.33000000000000002,"15min":0.33000000000000002},"max":{"1min":1.5049999999999999,"5min":1.5049999999999999,"15min":1.5049999999999999},"last":0.502},{"interface":"front","average":{"1min":0.64800000000000002,"5min":0.64800000000000002,"15min":0.64800000000000002},"min":{"1min":0.41799999999999998,"5min":0.41799999999999998,"15min":0.41799999999999998},"max":{"1min":1.085,"5min":1.085,"15min":1.085},"last":0.69899999999999995}]},{"osd":3,"last update":"Mon Mar 9 14:32:28 2026","interfaces":[{"interface":"back","average":{"1min":0.63,"5min":0.63,"15min":0.63},"min":{"1min":0.39800000000000002,"5min":0.39800000000000002,"15min":0.39800000000000002},"max":{"1min":0.98599999999999999,"5min":0.98599999999999999,"15min":0.98599999999999999},"last":0.74099999999999999},{"interface":"front","average":{"1min":0.60799999999999998,"5min":0.60799999999999998,"15min":0.60799999999999998},"min":{"1min":0.32300000000000001,"5min":0.32300000000000001,"15min":0.32300000000000001},"max":{"1min":0.94799999999999995,"5min":0.94799999999999995,"15min":0.94799999999999995},"last":0.77100000000000002}]},{"osd":5,"last update":"Mon Mar 9 14:32:41 2026","interfaces":[{"interface":"back","average":{"1min":0.59699999999999998,"5min":0.59699999999999998,"15min":0.59699999999999998},"min":{"1min":0.30099999999999999,"5min":0.30099999999999999,"15min":0.30099999999999999},"max":{"1min":1.002,"5min":1.002,"15min":1.002},"last":0.73199999999999998},{"interface":"front","average":{"1min":0.57899999999999996,"5min":0.57899999999999996,"15min":0.57899999999999996},"min":{"1min":0.36899999999999999,"5min":0.36899999999999999,"15min":0.36899999999999999},"max":{"1min":0.97899999999999998,"5min":0.97899999999999998,"15min":0.97899999999999998},"last":0.78100000000000003}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.39700000000000002}]},{"osd":7,"last update":"Thu 
Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.76400000000000001}]}]},{"osd":5,"up_from":36,"seq":154618822670,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":5848,"kb_used_data":464,"kb_used_omap":0,"kb_used_meta":5376,"kb_avail":20961576,"statfs":{"total":21470642176,"available":21464653824,"internally_reserved":0,"allocated":475136,"data_stored":193086,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5505024},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Mon Mar 9 14:32:44 2026","interfaces":[{"interface":"back","average":{"1min":0.628,"5min":0.628,"15min":0.628},"min":{"1min":0.30499999999999999,"5min":0.30499999999999999,"15min":0.30499999999999999},"max":{"1min":1.2190000000000001,"5min":1.2190000000000001,"15min":1.2190000000000001},"last":1.2190000000000001},{"interface":"front","average":{"1min":0.63,"5min":0.63,"15min":0.63},"min":{"1min":0.28299999999999997,"5min":0.28299999999999997,"15min":0.28299999999999997},"max":{"1min":1.0049999999999999,"5min":1.0049999999999999,"15min":1.0049999999999999},"last":0.64100000000000001}]},{"osd":1,"last update":"Mon Mar 9 14:32:44 2026","interfaces":[{"interface":"back","average":{"1min":0.69999999999999996,"5min":0.69999999999999996,"15min":0.69999999999999996},"min":{"1min":0.50900000000000001,"5min":0.50900000000000001,"15min":0.50900000000000001},"max":{"1min":1.115,"5min":1.115,"15min":1.115},"last":0.59699999999999998},{"interface":"front","average":{"1min":0.69999999999999996,"5min":0.69999999999999996,"15min":0.69999999999999996},"min":{"1min":0.495,"5min":0.495,"15min":0.495},"max":{"1min":1.0089999999999999,"5min":1.0089999999999999,"15min":1.0089999999999999},"last":0.56899999999999995}]},{"osd":2,"last update":"Mon Mar 9 14:32:44 2026","interfaces":[{"interface":"back","average":{"1min":0.61799999999999999,"5min":0.61799999999999999,"15min":0.61799999999999999},"min":{"1min":0.32800000000000001,"5min":0.32800000000000001,"15min":0.32800000000000001},"max":{"1min":1.0169999999999999,"5min":1.0169999999999999,"15min":1.0169999999999999},"last":0.65600000000000003},{"interface":"front","average":{"1min":0.73899999999999999,"5min":0.73899999999999999,"15min":0.73899999999999999},"min":{"1min":0.40500000000000003,"5min":0.40500000000000003,"15min":0.40500000000000003},"max":{"1min":1.2849999999999999,"5min":1.2849999999999999,"15min":1.2849999999999999},"last":1.2849999999999999}]},{"osd":3,"last update":"Mon Mar 9 14:32:44 2026","interfaces":[{"interface":"back","average":{"1min":0.75600000000000001,"5min":0.75600000000000001,"15min":0.75600000000000001},"min":{"1min":0.42399999999999999,"5min":0.42399999999999999,"15min":0.42399999999999999},"max":{"1min":1.4059999999999999,"5min":1.4059999999999999,"15min":1.4059999999999999},"last":1.405},{"interface":"front","average":{"1min":0.752,"5min":0.752,"15min":0.752},"min":{"1min":0.35799999999999998,"5min":0.35799999999999998,"15min":0.35799999999999998},"max":{"1min":1.1220000000000001,"5min":1.1220000000000001,"15min":1.1220000000000001},"last":0.72699999999999998}]},{"osd":4,"last 
update":"Mon Mar 9 14:32:44 2026","interfaces":[{"interface":"back","average":{"1min":0.58899999999999997,"5min":0.58899999999999997,"15min":0.58899999999999997},"min":{"1min":0.311,"5min":0.311,"15min":0.311},"max":{"1min":0.98199999999999998,"5min":0.98199999999999998,"15min":0.98199999999999998},"last":0.501},{"interface":"front","average":{"1min":0.56299999999999994,"5min":0.56299999999999994,"15min":0.56299999999999994},"min":{"1min":0.215,"5min":0.215,"15min":0.215},"max":{"1min":1.054,"5min":1.054,"15min":1.054},"last":0.623}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.51400000000000001}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.55900000000000005}]}]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":401408,"data_stored":397840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":401408,"data_stored":397840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":401408,"data_stored":397840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-09T14:32:49.228 INFO:tasks.cephadm.ceph_manager.ceph:clean! 2026-03-09T14:32:49.228 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 
2026-03-09T14:32:49.228 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy 2026-03-09T14:32:49.228 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph health --format=json 2026-03-09T14:32:49.258 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:48 vm11 bash[17885]: cluster 2026-03-09T14:32:47.007303+0000 mgr.y (mgr.24310) 32 : cluster [DBG] pgmap v15: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:49.258 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:48 vm11 bash[17885]: audit 2026-03-09T14:32:47.060354+0000 mgr.y (mgr.24310) 33 : audit [DBG] from='client.24436 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T14:32:49.943 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:49 vm11 bash[17885]: cluster 2026-03-09T14:32:49.007552+0000 mgr.y (mgr.24310) 34 : cluster [DBG] pgmap v16: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:50.164 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:49 vm07 bash[22585]: cluster 2026-03-09T14:32:49.007552+0000 mgr.y (mgr.24310) 34 : cluster [DBG] pgmap v16: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:50.164 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:49 vm07 bash[22585]: audit 2026-03-09T14:32:49.168668+0000 mgr.y (mgr.24310) 35 : audit [DBG] from='client.14547 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T14:32:50.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:49 vm07 bash[17480]: cluster 2026-03-09T14:32:49.007552+0000 mgr.y (mgr.24310) 34 : cluster [DBG] pgmap v16: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:50.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:49 vm07 bash[17480]: audit 2026-03-09T14:32:49.168668+0000 mgr.y (mgr.24310) 35 : audit [DBG] from='client.14547 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T14:32:50.227 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:49 vm11 bash[17885]: audit 2026-03-09T14:32:49.168668+0000 mgr.y (mgr.24310) 35 : audit [DBG] from='client.14547 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T14:32:50.841 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:50 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:32:50.841 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:50 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:32:50.841-842 INFO:journalctl@ceph.*.vm11.stdout:Mar 09 14:32:50 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. [the identical deprecation warning quoted above was repeated by systemd for every cephadm-managed unit on vm11: mgr.x, osd.4, osd.5, osd.6, osd.7, prometheus.a, node-exporter.b, grafana.a]
2026-03-09T14:32:51.092 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:50 vm11 systemd[1]: Started Ceph grafana.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040.
2026-03-09T14:32:51.092 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="The state of unified alerting is still not defined. The decision will be made during as we run the database migrations" logger=settings
2026-03-09T14:32:51.092 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=warn msg="falling back to legacy setting of 'min_interval_seconds'; please use the configuration option in the `unified_alerting` section if Grafana 8 alerts are enabled."
logger=settings 2026-03-09T14:32:51.092 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Config loaded from" logger=settings file=/usr/share/grafana/conf/defaults.ini 2026-03-09T14:32:51.092 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Config loaded from" logger=settings file=/etc/grafana/grafana.ini 2026-03-09T14:32:51.092 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_DATA=/var/lib/grafana" 2026-03-09T14:32:51.092 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_LOGS=/var/log/grafana" 2026-03-09T14:32:51.092 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" 2026-03-09T14:32:51.092 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 2026-03-09T14:32:51.092 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Path Home" logger=settings path=/usr/share/grafana 2026-03-09T14:32:51.092 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Path Data" logger=settings path=/var/lib/grafana 2026-03-09T14:32:51.092 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Path Logs" logger=settings path=/var/log/grafana 2026-03-09T14:32:51.092 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Path Plugins" logger=settings path=/var/lib/grafana/plugins 2026-03-09T14:32:51.092 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Path Provisioning" logger=settings path=/etc/grafana/provisioning 2026-03-09T14:32:51.092 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="App mode production" logger=settings 2026-03-09T14:32:51.092 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Connecting to DB" logger=sqlstore dbtype=sqlite3 2026-03-09T14:32:51.092 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=warn msg="SQLite database file has broader permissions than it should" logger=sqlstore path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r----- 2026-03-09T14:32:51.092 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Starting DB migrations" logger=migrator 2026-03-09T14:32:51.093 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create migration_log table" 2026-03-09T14:32:51.093 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 
vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create user table" 2026-03-09T14:32:51.093 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index user.login" 2026-03-09T14:32:51.093 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index user.email" 2026-03-09T14:32:51.093 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="drop index UQE_user_login - v1" 2026-03-09T14:32:51.093 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="drop index UQE_user_email - v1" 2026-03-09T14:32:51.093 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Rename table user to user_v1 - v1" 2026-03-09T14:32:51.093 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create user table v2" 2026-03-09T14:32:51.093 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_user_login - v2" 2026-03-09T14:32:51.093 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_user_email - v2" 2026-03-09T14:32:51.093 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="copy data_source v1 to v2" 2026-03-09T14:32:51.093 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Drop old table user_v1" 2026-03-09T14:32:51.093 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add column help_flags1 to user table" 2026-03-09T14:32:51.093 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Update user table charset" 2026-03-09T14:32:51.093 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add last_seen_at column to user" 2026-03-09T14:32:51.093 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add missing user data" 2026-03-09T14:32:51.093 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add is_disabled column to user" 2026-03-09T14:32:51.093 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add index user.login/user.email" 2026-03-09T14:32:51.093 
INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add is_service_account column to user" 2026-03-09T14:32:51.093 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create temp user table v1-7" 2026-03-09T14:32:51.093 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_temp_user_email - v1-7" 2026-03-09T14:32:51.093 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_temp_user_org_id - v1-7" 2026-03-09T14:32:51.093 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_temp_user_code - v1-7" 2026-03-09T14:32:51.093 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_temp_user_status - v1-7" 2026-03-09T14:32:51.093 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Update temp_user table charset" 2026-03-09T14:32:51.093 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="drop index IDX_temp_user_email - v1" 2026-03-09T14:32:51.093 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="drop index IDX_temp_user_org_id - v1" 2026-03-09T14:32:51.093 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="drop index IDX_temp_user_code - v1" 2026-03-09T14:32:51.093 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="drop index IDX_temp_user_status - v1" 2026-03-09T14:32:51.093 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Rename table temp_user to temp_user_tmp_qwerty - v1" 2026-03-09T14:32:51.093 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create temp_user v2" 2026-03-09T14:32:51.093 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_temp_user_email - v2" 2026-03-09T14:32:51.093 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_temp_user_org_id - v2" 2026-03-09T14:32:51.093 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_temp_user_code - v2" 2026-03-09T14:32:51.093 
INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_temp_user_status - v2" 2026-03-09T14:32:51.093 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="copy temp_user v1 to v2" 2026-03-09T14:32:51.093 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="drop temp_user_tmp_qwerty" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Set created for temp users that will otherwise prematurely expire" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create star table" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index star.user_id_dashboard_id" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create org table v1" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_org_name - v1" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create org_user table v1" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_org_user_org_id - v1" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_org_user_org_id_user_id - v1" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_org_user_user_id - v1" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Update org table charset" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Update org_user table charset" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Migrate all Read Only Viewers to Viewers" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard table" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: 
t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index dashboard.account_id" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index dashboard_account_id_slug" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard_tag table" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index dashboard_tag.dasboard_id_term" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Rename table dashboard to dashboard_v1 - v1" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard v2" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_dashboard_org_id - v2" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_dashboard_org_id_slug - v2" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="copy dashboard v1 to v2" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="drop table dashboard_v1" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="alter dashboard.data to mediumtext v1" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add column updated_by in dashboard - v2" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add column created_by in dashboard - v2" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add column gnetId in dashboard" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for gnetId in dashboard" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 
lvl=info msg="Executing migration" logger=migrator id="Add column plugin_id in dashboard" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for plugin_id in dashboard" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for dashboard_id in dashboard_tag" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Update dashboard table charset" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Update dashboard_tag table charset" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add column folder_id in dashboard" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add column isFolder in dashboard" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add column has_acl in dashboard" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add column uid in dashboard" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Update uid column values in dashboard" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add unique index dashboard_org_id_uid" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Remove unique index org_id_slug" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Update dashboard title length" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add unique index for dashboard_org_id_title_folder_id" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard_provisioning" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 
lvl=info msg="Executing migration" logger=migrator id="create dashboard_provisioning v2" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_dashboard_provisioning_dashboard_id - v2" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="copy dashboard_provisioning v1 to v2" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="drop dashboard_provisioning_tmp_qwerty" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add check_sum column" 2026-03-09T14:32:51.346 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for dashboard_title" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="delete tags for deleted dashboards" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="delete stars for deleted dashboards" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for dashboard_is_folder" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create data_source table" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index data_source.account_id" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index data_source.account_id_name" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="drop index IDX_data_source_account_id - v1" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="drop index UQE_data_source_account_id_name - v1" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Rename table data_source to data_source_v1 - v1" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: 
t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create data_source table v2" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_data_source_org_id - v2" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_data_source_org_id_name - v2" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="copy data_source v1 to v2" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Drop old table data_source_v1 #2" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add column with_credentials" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add secure json data column" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Update data_source table charset" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Update initial version to 1" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add read_only data column" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Migrate logging ds to loki ds" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Update json_data with nulls" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add uid column" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Update uid value" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add unique index datasource_org_id_uid" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index datasource_org_id_is_default" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create api_key table" 
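Each of these migrator entries corresponds to a migration Grafana records in its migration_log bookkeeping table inside the SQLite database noted above (/var/lib/grafana/grafana.db). As an aside, and assuming the sqlite3 client is available in the container, the applied migrations could be inspected with a query such as:

    # Assumption: sqlite3 is installed; migration_log is Grafana's migration bookkeeping table.
    sqlite3 /var/lib/grafana/grafana.db \
        'SELECT migration_id, success, timestamp FROM migration_log ORDER BY id DESC LIMIT 5;'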
2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index api_key.account_id" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index api_key.key" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index api_key.account_id_name" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="drop index IDX_api_key_account_id - v1" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="drop index UQE_api_key_key - v1" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="drop index UQE_api_key_account_id_name - v1" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Rename table api_key to api_key_v1 - v1" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create api_key table v2" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_api_key_org_id - v2" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_api_key_key - v2" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_api_key_org_id_name - v2" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="copy api_key v1 to v2" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Drop old table api_key_v1" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Update api_key table charset" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add expires to api_key table" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add service account foreign key" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 
bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard_snapshot table v4" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="drop table dashboard_snapshot_v4 #1" 2026-03-09T14:32:51.347 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard_snapshot table v5 #2" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_dashboard_snapshot_key - v5" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_dashboard_snapshot_delete_key - v5" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_dashboard_snapshot_user_id - v5" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="alter dashboard_snapshot to mediumtext v2" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Update dashboard_snapshot table charset" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add column external_delete_url to dashboard_snapshots table" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add encrypted dashboard json column" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Change dashboard_encrypted column to MEDIUMBLOB" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create quota table v1" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_quota_org_id_user_id_target - v1" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Update quota table charset" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create plugin_setting table" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_plugin_setting_org_id_plugin_id - v1" 2026-03-09T14:32:51.348 
INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add column plugin_version to plugin_settings" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Update plugin_setting table charset" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create session table" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Drop old table playlist table" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Drop old table playlist_item table" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create playlist table v2" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create playlist item table v2" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Update playlist table charset" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Update playlist_item table charset" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="drop preferences table v2" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="drop preferences table v3" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create preferences table v3" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Update preferences table charset" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add column team_id in preferences" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Update team_id column values in preferences" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add column week_start in preferences" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: 
t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create alert table v1" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index alert org_id & id " 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index alert state" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index alert dashboard_id" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Create alert_rule_tag table v1" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add unique index alert_rule_tag.alert_id_tag_id" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Create alert_rule_tag table v2" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="copy alert_rule_tag v1 to v2" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="drop table alert_rule_tag_v1" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create alert_notification table v1" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add column is_default" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add column frequency" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add column send_reminder" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 
lvl=info msg="Executing migration" logger=migrator id="Add column disable_resolve_message" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index alert_notification org_id & name" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Update alert table charset" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Update alert_notification table charset" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create notification_journal table v1" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index notification_journal org_id & alert_id & notifier_id" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="drop alert_notification_journal" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create alert_notification_state table v1" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index alert_notification_state org_id & alert_id & notifier_id" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add for to alert table" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add column uid in alert_notification" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Update uid column values in alert_notification" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add unique index alert_notification_org_id_uid" 2026-03-09T14:32:51.348 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Remove unique index org_id_name" 2026-03-09T14:32:51.349 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add column secure_settings in alert_notification" 2026-03-09T14:32:51.349 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="alter alert.settings to mediumtext" 2026-03-09T14:32:51.349 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 
vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add non-unique index alert_notification_state_alert_id" 2026-03-09T14:32:51.349 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add non-unique index alert_rule_tag_alert_id" 2026-03-09T14:32:51.349 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Drop old annotation table v4" 2026-03-09T14:32:51.349 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create annotation table v5" 2026-03-09T14:32:51.349 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index annotation 0 v3" 2026-03-09T14:32:51.349 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index annotation 1 v3" 2026-03-09T14:32:51.349 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index annotation 2 v3" 2026-03-09T14:32:51.349 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index annotation 3 v3" 2026-03-09T14:32:51.349 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index annotation 4 v3" 2026-03-09T14:32:51.349 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Update annotation table charset" 2026-03-09T14:32:51.349 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add column region_id to annotation table" 2026-03-09T14:32:51.349 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Drop category_id index" 2026-03-09T14:32:51.349 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add column tags to annotation table" 2026-03-09T14:32:51.349 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Create annotation_tag table v2" 2026-03-09T14:32:51.349 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add unique index annotation_tag.annotation_id_tag_id" 2026-03-09T14:32:51.349 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 2026-03-09T14:32:51.349 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 
lvl=info msg="Executing migration" logger=migrator id="Rename table annotation_tag to annotation_tag_v2 - v2" 2026-03-09T14:32:51.349 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Create annotation_tag table v3" 2026-03-09T14:32:51.349 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 2026-03-09T14:32:51.349 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="copy annotation_tag v2 to v3" 2026-03-09T14:32:51.349 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="drop table annotation_tag_v2" 2026-03-09T14:32:51.349 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Update alert annotations and set TEXT to empty" 2026-03-09T14:32:51.349 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add created time to annotation table" 2026-03-09T14:32:51.349 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add updated time to annotation table" 2026-03-09T14:32:51.349 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for created in annotation table" 2026-03-09T14:32:51.349 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for updated in annotation table" 2026-03-09T14:32:51.349 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Convert existing annotations from seconds to milliseconds" 2026-03-09T14:32:51.349 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add epoch_end column" 2026-03-09T14:32:51.349 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for epoch_end" 2026-03-09T14:32:51.349 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Make epoch_end the same as epoch" 2026-03-09T14:32:51.349 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Move region to single row" 2026-03-09T14:32:51.349 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Remove index org_id_epoch from annotation table" 2026-03-09T14:32:51.349 
INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 2026-03-09T14:32:51.595 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for org_id_epoch_end_epoch on annotation table" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Remove index org_id_epoch_epoch_end from annotation table" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for alert_id on annotation table" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create test_data table" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard_version table v1" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index dashboard_version.dashboard_id" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Set dashboard version to 1 where 0" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="save existing dashboard data in dashboard_version table v1" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="alter dashboard_version.data to mediumtext v1" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create team table" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index team.org_id" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index team_org_id_name" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 
lvl=info msg="Executing migration" logger=migrator id="create team member table" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index team_member.org_id" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index team_member_org_id_team_id_user_id" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index team_member.team_id" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add column email to team table" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add column external to team_member table" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add column permission to team_member table" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard acl table" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index dashboard_acl_dashboard_id" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index dashboard_acl_dashboard_id_user_id" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index dashboard_acl_dashboard_id_team_id" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index dashboard_acl_user_id" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index dashboard_acl_team_id" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index dashboard_acl_org_id_role" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index dashboard_permission" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="save default acl rules in dashboard_acl table" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info 
msg="Executing migration" logger=migrator id="delete acl rules for deleted dashboards and folders" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create tag table" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index tag.key_value" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create login attempt table" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index login_attempt.username" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="drop index IDX_login_attempt_username - v1" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create login_attempt v2" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_login_attempt_username - v2" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="copy login_attempt v1 to v2" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="drop login_attempt_tmp_qwerty" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create user auth table" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_user_auth_auth_module_auth_id - v1" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="alter user_auth.auth_id to length 190" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add OAuth access token to user_auth" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add OAuth refresh token to user_auth" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator 
id="Add OAuth token type to user_auth" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add OAuth expiry to user_auth" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add index to user_id column in user_auth" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create server_lock table" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index server_lock.operation_uid" 2026-03-09T14:32:51.596 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create user auth token table" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index user_auth_token.auth_token" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index user_auth_token.prev_auth_token" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index user_auth_token.user_id" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add revoked_at to the user auth token" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create cache_data table" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index cache_data.cache_key" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create short_url table v1" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index short_url.org_id-uid" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="delete alert_definition table" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="recreate alert_definition table" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_definition on org_id and title columns" 
2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_definition on org_id and uid columns" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="alter alert_definition table data column to mediumtext in mysql" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="drop index in alert_definition on org_id and title columns" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="drop index in alert_definition on org_id and uid columns" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index in alert_definition on org_id and title columns" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index in alert_definition on org_id and uid columns" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add column paused in alert_definition" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="drop alert_definition table" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="delete alert_definition_version table" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="recreate alert_definition_version table" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_definition_version table on alert_definition_id and version columns" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_definition_version table on alert_definition_uid and version columns" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="alter alert_definition_version table data column to mediumtext in mysql" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="drop alert_definition_version table" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator 
id="create alert_instance table" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_instance table on def_org_id, def_uid and current_state columns" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_instance table on def_org_id, current_state columns" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add column current_state_end to alert_instance" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="remove index def_org_id, def_uid, current_state on alert_instance" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="remove index def_org_id, current_state on alert_instance" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="rename def_org_id to rule_org_id in alert_instance" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="rename def_uid to rule_uid in alert_instance" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index rule_org_id, rule_uid, current_state on alert_instance" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index rule_org_id, current_state on alert_instance" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create alert_rule table" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_rule on org_id and title columns" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_rule on org_id and uid columns" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_rule on org_id, namespace_uid, group_uid columns" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="alter alert_rule table data column to mediumtext in mysql" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing 
migration" logger=migrator id="add column for to alert_rule" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add column annotations to alert_rule" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add column labels to alert_rule" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="remove unique index from alert_rule on org_id, title columns" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_rule on org_id, namespase_uid and title columns" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add dashboard_uid column to alert_rule" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add panel_id column to alert_rule" 2026-03-09T14:32:51.597 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create alert_rule_version table" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="alter alert_rule_version table data column to mediumtext in mysql" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add column for to alert_rule_version" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add column annotations to alert_rule_version" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add column labels to alert_rule_version" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator 
id=create_alert_configuration_table 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add column default in alert_configuration" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add column org_id in alert_configuration" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_configuration table on org_id column" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id=create_ngalert_configuration_table 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index in ngalert_configuration on org_id column" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="clear migration entry \"remove unified alerting data\"" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="move dashboard alerts to unified alerting" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create library_element table v1" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index library_element org_id-folder_id-name-kind" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create library_element_connection table v1" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index library_element_connection element_id-kind-connection_id" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index library_element org_id_uid" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="clone move dashboard alerts to unified alerting" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create data_keys table" 2026-03-09T14:32:51.598 
INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create kv_store table v1" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index kv_store.org_id-namespace-key" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="update dashboard_uid and panel_id from existing annotations" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create permission table" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index permission.role_id" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index role_id_action_scope" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create role table" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add column display_name" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add column group_name" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index role.org_id" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index role_org_id_name" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index role_org_id_uid" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create team role table" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index team_role.org_id" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index team_role_org_id_team_id_role_id" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index team_role.team_id" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info 
msg="Executing migration" logger=migrator id="create user role table" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index user_role.org_id" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index user_role_org_id_user_id_role_id" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index user_role.user_id" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create builtin role table" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index builtin_role.role_id" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index builtin_role.name" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Add column org_id to builtin_role table" 2026-03-09T14:32:51.598 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add index builtin_role.org_id" 2026-03-09T14:32:51.599 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index builtin_role_org_id_role_id_role" 2026-03-09T14:32:51.599 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="Remove unique index role_org_id_uid" 2026-03-09T14:32:51.599 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index role.uid" 2026-03-09T14:32:51.599 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="create seed assignment table" 2026-03-09T14:32:51.599 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index builtin_role_role_name" 2026-03-09T14:32:51.599 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="migrations completed" logger=migrator performed=381 skipped=0 duration=525.428242ms 2026-03-09T14:32:51.599 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Created default organization" logger=sqlstore 2026-03-09T14:32:51.599 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Initialising plugins" logger=plugin.manager 2026-03-09T14:32:51.599 
INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Plugin registered" logger=plugin.manager pluginId=input 2026-03-09T14:32:51.599 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Plugin registered" logger=plugin.manager pluginId=vonage-status-panel 2026-03-09T14:32:51.599 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Plugin registered" logger=plugin.manager pluginId=grafana-piechart-panel 2026-03-09T14:32:51.847 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/mon.c/config 2026-03-09T14:32:51.877 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="Live Push Gateway initialization" logger=live.push_http 2026-03-09T14:32:51.877 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=warn msg="[Deprecated] the datasource provisioning config is outdated. please upgrade" logger=provisioning.datasources filename=/etc/grafana/provisioning/datasources/ceph-dashboard.yml 2026-03-09T14:32:51.877 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="inserting datasource from configuration " logger=provisioning.datasources name=Dashboard1 uid=P43CA22E17D0F9596 2026-03-09T14:32:51.877 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="HTTP Server Listen" logger=http.server address=[::]:3000 protocol=https subUrl= socket= 2026-03-09T14:32:51.877 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="warming cache for startup" logger=ngalert 2026-03-09T14:32:51.877 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:32:51 vm11 bash[33410]: t=2026-03-09T14:32:51+0000 lvl=info msg="starting MultiOrg Alertmanager" logger=ngalert.multiorg.alertmanager 2026-03-09T14:32:52.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:51 vm07 bash[17480]: audit 2026-03-09T14:32:50.866557+0000 mon.a (mon.0) 610 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:52.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:51 vm07 bash[17480]: audit 2026-03-09T14:32:50.869210+0000 mon.b (mon.2) 56 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:32:52.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:51 vm07 bash[17480]: audit 2026-03-09T14:32:50.870309+0000 mon.b (mon.2) 57 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:32:52.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:51 vm07 bash[17480]: audit 2026-03-09T14:32:50.871057+0000 mon.b (mon.2) 58 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:32:52.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:51 vm07 bash[17480]: cluster 2026-03-09T14:32:51.007844+0000 mgr.y (mgr.24310) 36 : cluster [DBG] pgmap v17: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:52.164 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:51 vm07 
bash[22585]: audit 2026-03-09T14:32:50.866557+0000 mon.a (mon.0) 610 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:52.164 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:51 vm07 bash[22585]: audit 2026-03-09T14:32:50.869210+0000 mon.b (mon.2) 56 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:32:52.164 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:51 vm07 bash[22585]: audit 2026-03-09T14:32:50.870309+0000 mon.b (mon.2) 57 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:32:52.164 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:51 vm07 bash[22585]: audit 2026-03-09T14:32:50.871057+0000 mon.b (mon.2) 58 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:32:52.164 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:51 vm07 bash[22585]: cluster 2026-03-09T14:32:51.007844+0000 mgr.y (mgr.24310) 36 : cluster [DBG] pgmap v17: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:52.221 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-09T14:32:52.221 INFO:teuthology.orchestra.run.vm07.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-09T14:32:52.258 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:51 vm11 bash[17885]: audit 2026-03-09T14:32:50.866557+0000 mon.a (mon.0) 610 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:52.258 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:51 vm11 bash[17885]: audit 2026-03-09T14:32:50.869210+0000 mon.b (mon.2) 56 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:32:52.258 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:51 vm11 bash[17885]: audit 2026-03-09T14:32:50.870309+0000 mon.b (mon.2) 57 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:32:52.258 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:51 vm11 bash[17885]: audit 2026-03-09T14:32:50.871057+0000 mon.b (mon.2) 58 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:32:52.258 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:51 vm11 bash[17885]: cluster 2026-03-09T14:32:51.007844+0000 mgr.y (mgr.24310) 36 : cluster [DBG] pgmap v17: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:52.270 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done 2026-03-09T14:32:52.271 INFO:tasks.cephadm:Setup complete, yielding 2026-03-09T14:32:52.271 INFO:teuthology.run_tasks:Running task cephadm.shell... 2026-03-09T14:32:52.273 INFO:tasks.cephadm:Running commands on role mon.a host ubuntu@vm07.local 2026-03-09T14:32:52.273 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- bash -c 'ceph config set mgr mgr/cephadm/use_repo_digest false --force' 2026-03-09T14:32:52.770 INFO:teuthology.run_tasks:Running task cephadm.shell... 
2026-03-09T14:32:52.772 INFO:tasks.cephadm:Running commands on role mon.a host ubuntu@vm07.local 2026-03-09T14:32:52.772 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'radosgw-admin realm create --rgw-realm=r --default' 2026-03-09T14:32:52.778 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:32:52 vm07 bash[17785]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:32:52] "GET /metrics HTTP/1.1" 200 191122 "" "Prometheus/2.33.4" 2026-03-09T14:32:53.080 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:52 vm07 bash[22585]: audit 2026-03-09T14:32:52.220815+0000 mon.a (mon.0) 611 : audit [DBG] from='client.? 192.168.123.107:0/2519116455' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T14:32:53.081 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:52 vm07 bash[22585]: audit 2026-03-09T14:32:52.711598+0000 mon.a (mon.0) 612 : audit [INF] from='client.? ' entity='client.admin' 2026-03-09T14:32:53.081 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:52 vm07 bash[17480]: audit 2026-03-09T14:32:52.220815+0000 mon.a (mon.0) 611 : audit [DBG] from='client.? 192.168.123.107:0/2519116455' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T14:32:53.081 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:52 vm07 bash[17480]: audit 2026-03-09T14:32:52.711598+0000 mon.a (mon.0) 612 : audit [INF] from='client.? ' entity='client.admin' 2026-03-09T14:32:53.258 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:52 vm11 bash[17885]: audit 2026-03-09T14:32:52.220815+0000 mon.a (mon.0) 611 : audit [DBG] from='client.? 192.168.123.107:0/2519116455' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T14:32:53.258 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:52 vm11 bash[17885]: audit 2026-03-09T14:32:52.711598+0000 mon.a (mon.0) 612 : audit [INF] from='client.? 
' entity='client.admin' 2026-03-09T14:32:53.758 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:32:53 vm11 bash[18539]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:32:53] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T14:32:54.244 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:54 vm07 bash[22585]: cluster 2026-03-09T14:32:53.008115+0000 mgr.y (mgr.24310) 37 : cluster [DBG] pgmap v18: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:54.244 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:54 vm07 bash[22585]: audit 2026-03-09T14:32:53.113770+0000 mon.a (mon.0) 613 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:54.244 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:54 vm07 bash[22585]: audit 2026-03-09T14:32:53.353780+0000 mon.a (mon.0) 614 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:54.244 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:54 vm07 bash[22585]: audit 2026-03-09T14:32:53.936731+0000 mon.a (mon.0) 615 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:54.244 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:54 vm07 bash[22585]: audit 2026-03-09T14:32:53.944080+0000 mon.a (mon.0) 616 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:54.244 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:54 vm07 bash[17480]: cluster 2026-03-09T14:32:53.008115+0000 mgr.y (mgr.24310) 37 : cluster [DBG] pgmap v18: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:54.244 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:54 vm07 bash[17480]: audit 2026-03-09T14:32:53.113770+0000 mon.a (mon.0) 613 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:54.244 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:54 vm07 bash[17480]: audit 2026-03-09T14:32:53.353780+0000 mon.a (mon.0) 614 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:54.244 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:54 vm07 bash[17480]: audit 2026-03-09T14:32:53.936731+0000 mon.a (mon.0) 615 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:54.244 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:54 vm07 bash[17480]: audit 2026-03-09T14:32:53.944080+0000 mon.a (mon.0) 616 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:54.258 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:54 vm11 bash[17885]: cluster 2026-03-09T14:32:53.008115+0000 mgr.y (mgr.24310) 37 : cluster [DBG] pgmap v18: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:54.258 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:54 vm11 bash[17885]: audit 2026-03-09T14:32:53.113770+0000 mon.a (mon.0) 613 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:54.258 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:54 vm11 bash[17885]: audit 2026-03-09T14:32:53.353780+0000 mon.a (mon.0) 614 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:54.258 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:54 vm11 bash[17885]: audit 2026-03-09T14:32:53.936731+0000 mon.a (mon.0) 615 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:54.258 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:54 vm11 bash[17885]: audit 2026-03-09T14:32:53.944080+0000 mon.a (mon.0) 616 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:54.516 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:32:54 vm07 systemd[1]: Stopping Ceph alertmanager.a for 
f59f9828-1bc3-11f1-bfd8-7b3d0c866040... 2026-03-09T14:32:54.517 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:32:54 vm07 bash[42544]: Error response from daemon: No such container: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-alertmanager.a 2026-03-09T14:32:54.517 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:32:54 vm07 bash[38490]: level=info ts=2026-03-09T14:32:54.334Z caller=main.go:557 msg="Received SIGTERM, exiting gracefully..." 2026-03-09T14:32:54.517 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:32:54 vm07 bash[42552]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-alertmanager-a 2026-03-09T14:32:54.517 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:32:54 vm07 bash[42585]: Error response from daemon: No such container: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-alertmanager.a 2026-03-09T14:32:54.517 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:32:54 vm07 systemd[1]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@alertmanager.a.service: Deactivated successfully. 2026-03-09T14:32:54.517 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:32:54 vm07 systemd[1]: Stopped Ceph alertmanager.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:32:54.517 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:32:54 vm07 systemd[1]: Started Ceph alertmanager.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:32:54.864 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:32:54 vm07 bash[42609]: level=info ts=2026-03-09T14:32:54.516Z caller=main.go:225 msg="Starting Alertmanager" version="(version=0.23.0, branch=HEAD, revision=61046b17771a57cfd4c4a51be370ab930a4d7d54)" 2026-03-09T14:32:54.864 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:32:54 vm07 bash[42609]: level=info ts=2026-03-09T14:32:54.516Z caller=main.go:226 build_context="(go=go1.16.7, user=root@e21a959be8d2, date=20210825-10:48:55)" 2026-03-09T14:32:54.864 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:32:54 vm07 bash[42609]: level=info ts=2026-03-09T14:32:54.518Z caller=cluster.go:184 component=cluster msg="setting advertise address explicitly" addr=192.168.123.107 port=9094 2026-03-09T14:32:54.864 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:32:54 vm07 bash[42609]: level=info ts=2026-03-09T14:32:54.518Z caller=cluster.go:671 component=cluster msg="Waiting for gossip to settle..." interval=2s 2026-03-09T14:32:54.864 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:32:54 vm07 bash[42609]: level=info ts=2026-03-09T14:32:54.538Z caller=coordinator.go:113 component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-09T14:32:54.864 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:32:54 vm07 bash[42609]: level=info ts=2026-03-09T14:32:54.538Z caller=coordinator.go:126 component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-09T14:32:54.864 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:32:54 vm07 bash[42609]: level=info ts=2026-03-09T14:32:54.540Z caller=main.go:518 msg=Listening address=:9093 2026-03-09T14:32:54.864 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:32:54 vm07 bash[42609]: level=info ts=2026-03-09T14:32:54.540Z caller=tls_config.go:191 msg="TLS is disabled." http2=false 2026-03-09T14:32:54.937 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:54 vm11 systemd[1]: Stopping Ceph prometheus.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040... 
2026-03-09T14:32:54.937 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:54 vm11 bash[33907]: Error response from daemon: No such container: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-prometheus.a 2026-03-09T14:32:54.937 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:54 vm11 bash[33090]: ts=2026-03-09T14:32:54.787Z caller=main.go:775 level=warn msg="Received SIGTERM, exiting gracefully..." 2026-03-09T14:32:54.937 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:54 vm11 bash[33090]: ts=2026-03-09T14:32:54.787Z caller=main.go:798 level=info msg="Stopping scrape discovery manager..." 2026-03-09T14:32:54.937 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:54 vm11 bash[33090]: ts=2026-03-09T14:32:54.787Z caller=main.go:812 level=info msg="Stopping notify discovery manager..." 2026-03-09T14:32:54.937 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:54 vm11 bash[33090]: ts=2026-03-09T14:32:54.787Z caller=main.go:834 level=info msg="Stopping scrape manager..." 2026-03-09T14:32:54.937 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:54 vm11 bash[33090]: ts=2026-03-09T14:32:54.787Z caller=main.go:794 level=info msg="Scrape discovery manager stopped" 2026-03-09T14:32:54.937 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:54 vm11 bash[33090]: ts=2026-03-09T14:32:54.787Z caller=main.go:808 level=info msg="Notify discovery manager stopped" 2026-03-09T14:32:54.937 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:54 vm11 bash[33090]: ts=2026-03-09T14:32:54.787Z caller=manager.go:945 level=info component="rule manager" msg="Stopping rule manager..." 2026-03-09T14:32:54.937 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:54 vm11 bash[33090]: ts=2026-03-09T14:32:54.787Z caller=manager.go:955 level=info component="rule manager" msg="Rule manager stopped" 2026-03-09T14:32:54.937 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:54 vm11 bash[33090]: ts=2026-03-09T14:32:54.787Z caller=main.go:828 level=info msg="Scrape manager stopped" 2026-03-09T14:32:54.937 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:54 vm11 bash[33090]: ts=2026-03-09T14:32:54.788Z caller=notifier.go:600 level=info component=notifier msg="Stopping notification manager..." 2026-03-09T14:32:54.937 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:54 vm11 bash[33090]: ts=2026-03-09T14:32:54.788Z caller=main.go:1054 level=info msg="Notifier manager stopped" 2026-03-09T14:32:54.938 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:54 vm11 bash[33090]: ts=2026-03-09T14:32:54.788Z caller=main.go:1066 level=info msg="See you next time!" 2026-03-09T14:32:54.938 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:54 vm11 bash[33914]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-prometheus-a 2026-03-09T14:32:54.938 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:54 vm11 bash[33948]: Error response from daemon: No such container: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-prometheus.a 2026-03-09T14:32:54.938 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:54 vm11 systemd[1]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@prometheus.a.service: Deactivated successfully. 2026-03-09T14:32:54.938 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:54 vm11 systemd[1]: Stopped Ceph prometheus.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 
2026-03-09T14:32:54.938 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:54 vm11 systemd[1]: Started Ceph prometheus.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:32:55.197 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T14:32:55.197 INFO:teuthology.orchestra.run.vm07.stdout: "id": "26cad96a-5191-403a-8074-ea46ba6f5132", 2026-03-09T14:32:55.197 INFO:teuthology.orchestra.run.vm07.stdout: "name": "r", 2026-03-09T14:32:55.197 INFO:teuthology.orchestra.run.vm07.stdout: "current_period": "1e3bb30c-00d5-427f-812a-1f220a7ebff0", 2026-03-09T14:32:55.197 INFO:teuthology.orchestra.run.vm07.stdout: "epoch": 1 2026-03-09T14:32:55.197 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T14:32:55.257 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:55 vm11 bash[33974]: ts=2026-03-09T14:32:55.006Z caller=main.go:475 level=info msg="No time or size retention was set so using the default time retention" duration=15d 2026-03-09T14:32:55.257 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:55 vm11 bash[33974]: ts=2026-03-09T14:32:55.006Z caller=main.go:512 level=info msg="Starting Prometheus" version="(version=2.33.4, branch=HEAD, revision=83032011a5d3e6102624fe58241a374a7201fee8)" 2026-03-09T14:32:55.257 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:55 vm11 bash[33974]: ts=2026-03-09T14:32:55.006Z caller=main.go:517 level=info build_context="(go=go1.17.7, user=root@d13bf69e7be8, date=20220222-16:51:28)" 2026-03-09T14:32:55.258 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:55 vm11 bash[33974]: ts=2026-03-09T14:32:55.006Z caller=main.go:518 level=info host_details="(Linux 5.15.0-1092-kvm #97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026 x86_64 vm11 (none))" 2026-03-09T14:32:55.258 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:55 vm11 bash[33974]: ts=2026-03-09T14:32:55.006Z caller=main.go:519 level=info fd_limits="(soft=1048576, hard=1048576)" 2026-03-09T14:32:55.258 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:55 vm11 bash[33974]: ts=2026-03-09T14:32:55.006Z caller=main.go:520 level=info vm_limits="(soft=unlimited, hard=unlimited)" 2026-03-09T14:32:55.258 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:55 vm11 bash[33974]: ts=2026-03-09T14:32:55.008Z caller=web.go:570 level=info component=web msg="Start listening for connections" address=:9095 2026-03-09T14:32:55.258 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:55 vm11 bash[33974]: ts=2026-03-09T14:32:55.008Z caller=main.go:923 level=info msg="Starting TSDB ..." 2026-03-09T14:32:55.258 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:55 vm11 bash[33974]: ts=2026-03-09T14:32:55.009Z caller=tls_config.go:195 level=info component=web msg="TLS is disabled." 
http2=false 2026-03-09T14:32:55.258 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:55 vm11 bash[33974]: ts=2026-03-09T14:32:55.011Z caller=head.go:493 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 2026-03-09T14:32:55.258 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:55 vm11 bash[33974]: ts=2026-03-09T14:32:55.011Z caller=head.go:527 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=2.013µs 2026-03-09T14:32:55.258 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:55 vm11 bash[33974]: ts=2026-03-09T14:32:55.011Z caller=head.go:533 level=info component=tsdb msg="Replaying WAL, this may take a while" 2026-03-09T14:32:55.258 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:55 vm11 bash[17885]: cephadm 2026-03-09T14:32:53.946714+0000 mgr.y (mgr.24310) 38 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 2026-03-09T14:32:55.258 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:55 vm11 bash[17885]: cephadm 2026-03-09T14:32:53.948511+0000 mgr.y (mgr.24310) 39 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm07 2026-03-09T14:32:55.258 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:55 vm11 bash[17885]: cluster 2026-03-09T14:32:54.126844+0000 mon.a (mon.0) 617 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in 2026-03-09T14:32:55.258 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:55 vm11 bash[17885]: audit 2026-03-09T14:32:54.136396+0000 mon.a (mon.0) 618 : audit [INF] from='client.? 192.168.123.107:0/2288661750' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T14:32:55.258 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:55 vm11 bash[17885]: audit 2026-03-09T14:32:54.399863+0000 mon.a (mon.0) 619 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:55.258 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:55 vm11 bash[17885]: cephadm 2026-03-09T14:32:54.401889+0000 mgr.y (mgr.24310) 40 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 
2026-03-09T14:32:55.258 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:55 vm11 bash[17885]: cephadm 2026-03-09T14:32:54.403754+0000 mgr.y (mgr.24310) 41 : cephadm [INF] Reconfiguring daemon prometheus.a on vm11 2026-03-09T14:32:55.258 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:55 vm11 bash[17885]: audit 2026-03-09T14:32:54.863432+0000 mon.a (mon.0) 620 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:55.258 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:55 vm11 bash[17885]: audit 2026-03-09T14:32:54.868025+0000 mon.b (mon.2) 59 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T14:32:55.258 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:55 vm11 bash[17885]: audit 2026-03-09T14:32:54.869893+0000 mon.b (mon.2) 60 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.123.107:9093"}]: dispatch 2026-03-09T14:32:55.258 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:55 vm11 bash[17885]: audit 2026-03-09T14:32:54.875453+0000 mon.a (mon.0) 621 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:55.258 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:55 vm11 bash[17885]: audit 2026-03-09T14:32:54.887550+0000 mon.b (mon.2) 61 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T14:32:55.258 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:55 vm11 bash[17885]: audit 2026-03-09T14:32:54.889173+0000 mon.b (mon.2) 62 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.123.111:3000"}]: dispatch 2026-03-09T14:32:55.258 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:55 vm11 bash[17885]: audit 2026-03-09T14:32:54.897539+0000 mon.a (mon.0) 622 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:55.258 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:55 vm11 bash[17885]: audit 2026-03-09T14:32:54.900927+0000 mon.b (mon.2) 63 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T14:32:55.258 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:55 vm11 bash[17885]: audit 2026-03-09T14:32:54.907487+0000 mon.b (mon.2) 64 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.123.111:9095"}]: dispatch 2026-03-09T14:32:55.258 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:55 vm11 bash[17885]: audit 2026-03-09T14:32:54.913851+0000 mon.a (mon.0) 623 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:55.258 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:55 vm11 bash[17885]: audit 2026-03-09T14:32:54.917346+0000 mon.b (mon.2) 65 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:32:55.258 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:55 vm11 bash[17885]: audit 2026-03-09T14:32:54.918721+0000 mon.b (mon.2) 66 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:32:55.259 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image 
quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'radosgw-admin zonegroup create --rgw-zonegroup=default --master --default' 2026-03-09T14:32:55.413 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:55 vm07 bash[17480]: cephadm 2026-03-09T14:32:53.946714+0000 mgr.y (mgr.24310) 38 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 2026-03-09T14:32:55.414 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:55 vm07 bash[17480]: cephadm 2026-03-09T14:32:53.948511+0000 mgr.y (mgr.24310) 39 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm07 2026-03-09T14:32:55.414 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:55 vm07 bash[17480]: cluster 2026-03-09T14:32:54.126844+0000 mon.a (mon.0) 617 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in 2026-03-09T14:32:55.414 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:55 vm07 bash[17480]: audit 2026-03-09T14:32:54.136396+0000 mon.a (mon.0) 618 : audit [INF] from='client.? 192.168.123.107:0/2288661750' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T14:32:55.414 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:55 vm07 bash[17480]: audit 2026-03-09T14:32:54.399863+0000 mon.a (mon.0) 619 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:55.414 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:55 vm07 bash[17480]: cephadm 2026-03-09T14:32:54.401889+0000 mgr.y (mgr.24310) 40 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-09T14:32:55.414 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:55 vm07 bash[17480]: cephadm 2026-03-09T14:32:54.403754+0000 mgr.y (mgr.24310) 41 : cephadm [INF] Reconfiguring daemon prometheus.a on vm11 2026-03-09T14:32:55.414 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:55 vm07 bash[17480]: audit 2026-03-09T14:32:54.863432+0000 mon.a (mon.0) 620 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:55.414 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:55 vm07 bash[17480]: audit 2026-03-09T14:32:54.868025+0000 mon.b (mon.2) 59 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T14:32:55.414 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:55 vm07 bash[17480]: audit 2026-03-09T14:32:54.869893+0000 mon.b (mon.2) 60 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.123.107:9093"}]: dispatch 2026-03-09T14:32:55.414 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:55 vm07 bash[17480]: audit 2026-03-09T14:32:54.875453+0000 mon.a (mon.0) 621 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:55.414 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:55 vm07 bash[17480]: audit 2026-03-09T14:32:54.887550+0000 mon.b (mon.2) 61 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T14:32:55.414 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:55 vm07 bash[17480]: audit 2026-03-09T14:32:54.889173+0000 mon.b (mon.2) 62 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.123.111:3000"}]: dispatch 2026-03-09T14:32:55.414 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:55 vm07 bash[17480]: audit 2026-03-09T14:32:54.897539+0000 mon.a (mon.0) 622 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:55.414 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:55 vm07 bash[17480]: audit 2026-03-09T14:32:54.900927+0000 mon.b (mon.2) 63 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T14:32:55.414 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:55 vm07 bash[17480]: audit 2026-03-09T14:32:54.907487+0000 mon.b (mon.2) 64 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.123.111:9095"}]: dispatch 2026-03-09T14:32:55.414 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:55 vm07 bash[17480]: audit 2026-03-09T14:32:54.913851+0000 mon.a (mon.0) 623 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:55.414 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:55 vm07 bash[17480]: audit 2026-03-09T14:32:54.917346+0000 mon.b (mon.2) 65 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:32:55.414 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:55 vm07 bash[17480]: audit 2026-03-09T14:32:54.918721+0000 mon.b (mon.2) 66 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:32:55.415 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:55 vm07 bash[22585]: cephadm 2026-03-09T14:32:53.946714+0000 mgr.y (mgr.24310) 38 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 2026-03-09T14:32:55.415 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:55 vm07 bash[22585]: cephadm 2026-03-09T14:32:53.948511+0000 mgr.y (mgr.24310) 39 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm07 2026-03-09T14:32:55.415 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:55 vm07 bash[22585]: cluster 2026-03-09T14:32:54.126844+0000 mon.a (mon.0) 617 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in 2026-03-09T14:32:55.415 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:55 vm07 bash[22585]: audit 2026-03-09T14:32:54.136396+0000 mon.a (mon.0) 618 : audit [INF] from='client.? 192.168.123.107:0/2288661750' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T14:32:55.415 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:55 vm07 bash[22585]: audit 2026-03-09T14:32:54.399863+0000 mon.a (mon.0) 619 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:55.415 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:55 vm07 bash[22585]: cephadm 2026-03-09T14:32:54.401889+0000 mgr.y (mgr.24310) 40 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 
2026-03-09T14:32:55.415 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:55 vm07 bash[22585]: cephadm 2026-03-09T14:32:54.403754+0000 mgr.y (mgr.24310) 41 : cephadm [INF] Reconfiguring daemon prometheus.a on vm11 2026-03-09T14:32:55.415 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:55 vm07 bash[22585]: audit 2026-03-09T14:32:54.863432+0000 mon.a (mon.0) 620 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:55.415 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:55 vm07 bash[22585]: audit 2026-03-09T14:32:54.868025+0000 mon.b (mon.2) 59 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T14:32:55.415 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:55 vm07 bash[22585]: audit 2026-03-09T14:32:54.869893+0000 mon.b (mon.2) 60 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.123.107:9093"}]: dispatch 2026-03-09T14:32:55.415 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:55 vm07 bash[22585]: audit 2026-03-09T14:32:54.875453+0000 mon.a (mon.0) 621 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:55.415 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:55 vm07 bash[22585]: audit 2026-03-09T14:32:54.887550+0000 mon.b (mon.2) 61 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T14:32:55.415 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:55 vm07 bash[22585]: audit 2026-03-09T14:32:54.889173+0000 mon.b (mon.2) 62 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.123.111:3000"}]: dispatch 2026-03-09T14:32:55.415 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:55 vm07 bash[22585]: audit 2026-03-09T14:32:54.897539+0000 mon.a (mon.0) 622 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:55.415 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:55 vm07 bash[22585]: audit 2026-03-09T14:32:54.900927+0000 mon.b (mon.2) 63 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T14:32:55.415 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:55 vm07 bash[22585]: audit 2026-03-09T14:32:54.907487+0000 mon.b (mon.2) 64 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.123.111:9095"}]: dispatch 2026-03-09T14:32:55.415 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:55 vm07 bash[22585]: audit 2026-03-09T14:32:54.913851+0000 mon.a (mon.0) 623 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:55.415 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:55 vm07 bash[22585]: audit 2026-03-09T14:32:54.917346+0000 mon.b (mon.2) 65 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:32:55.415 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:55 vm07 bash[22585]: audit 2026-03-09T14:32:54.918721+0000 mon.b (mon.2) 66 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:32:55.629 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T14:32:55.629 
INFO:teuthology.orchestra.run.vm07.stdout: "id": "05d682be-e279-4c2e-9736-4f20fdd8ea4a", 2026-03-09T14:32:55.629 INFO:teuthology.orchestra.run.vm07.stdout: "name": "default", 2026-03-09T14:32:55.629 INFO:teuthology.orchestra.run.vm07.stdout: "api_name": "default", 2026-03-09T14:32:55.629 INFO:teuthology.orchestra.run.vm07.stdout: "is_master": "true", 2026-03-09T14:32:55.629 INFO:teuthology.orchestra.run.vm07.stdout: "endpoints": [], 2026-03-09T14:32:55.629 INFO:teuthology.orchestra.run.vm07.stdout: "hostnames": [], 2026-03-09T14:32:55.629 INFO:teuthology.orchestra.run.vm07.stdout: "hostnames_s3website": [], 2026-03-09T14:32:55.629 INFO:teuthology.orchestra.run.vm07.stdout: "master_zone": "", 2026-03-09T14:32:55.629 INFO:teuthology.orchestra.run.vm07.stdout: "zones": [], 2026-03-09T14:32:55.629 INFO:teuthology.orchestra.run.vm07.stdout: "placement_targets": [], 2026-03-09T14:32:55.629 INFO:teuthology.orchestra.run.vm07.stdout: "default_placement": "", 2026-03-09T14:32:55.629 INFO:teuthology.orchestra.run.vm07.stdout: "realm_id": "26cad96a-5191-403a-8074-ea46ba6f5132", 2026-03-09T14:32:55.629 INFO:teuthology.orchestra.run.vm07.stdout: "sync_policy": { 2026-03-09T14:32:55.629 INFO:teuthology.orchestra.run.vm07.stdout: "groups": [] 2026-03-09T14:32:55.629 INFO:teuthology.orchestra.run.vm07.stdout: } 2026-03-09T14:32:55.630 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T14:32:55.679 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=z --master --default' 2026-03-09T14:32:56.123 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T14:32:56.124 INFO:teuthology.orchestra.run.vm07.stdout: "id": "76d7a7f6-1e92-409d-9f83-fc503f578da9", 2026-03-09T14:32:56.124 INFO:teuthology.orchestra.run.vm07.stdout: "name": "z", 2026-03-09T14:32:56.124 INFO:teuthology.orchestra.run.vm07.stdout: "domain_root": "z.rgw.meta:root", 2026-03-09T14:32:56.124 INFO:teuthology.orchestra.run.vm07.stdout: "control_pool": "z.rgw.control", 2026-03-09T14:32:56.124 INFO:teuthology.orchestra.run.vm07.stdout: "gc_pool": "z.rgw.log:gc", 2026-03-09T14:32:56.124 INFO:teuthology.orchestra.run.vm07.stdout: "lc_pool": "z.rgw.log:lc", 2026-03-09T14:32:56.124 INFO:teuthology.orchestra.run.vm07.stdout: "log_pool": "z.rgw.log", 2026-03-09T14:32:56.124 INFO:teuthology.orchestra.run.vm07.stdout: "intent_log_pool": "z.rgw.log:intent", 2026-03-09T14:32:56.124 INFO:teuthology.orchestra.run.vm07.stdout: "usage_log_pool": "z.rgw.log:usage", 2026-03-09T14:32:56.124 INFO:teuthology.orchestra.run.vm07.stdout: "roles_pool": "z.rgw.meta:roles", 2026-03-09T14:32:56.124 INFO:teuthology.orchestra.run.vm07.stdout: "reshard_pool": "z.rgw.log:reshard", 2026-03-09T14:32:56.124 INFO:teuthology.orchestra.run.vm07.stdout: "user_keys_pool": "z.rgw.meta:users.keys", 2026-03-09T14:32:56.124 INFO:teuthology.orchestra.run.vm07.stdout: "user_email_pool": "z.rgw.meta:users.email", 2026-03-09T14:32:56.124 INFO:teuthology.orchestra.run.vm07.stdout: "user_swift_pool": "z.rgw.meta:users.swift", 2026-03-09T14:32:56.124 INFO:teuthology.orchestra.run.vm07.stdout: "user_uid_pool": "z.rgw.meta:users.uid", 2026-03-09T14:32:56.124 INFO:teuthology.orchestra.run.vm07.stdout: "otp_pool": "z.rgw.otp", 2026-03-09T14:32:56.124 INFO:teuthology.orchestra.run.vm07.stdout: "system_key": { 
2026-03-09T14:32:56.124 INFO:teuthology.orchestra.run.vm07.stdout: "access_key": "", 2026-03-09T14:32:56.124 INFO:teuthology.orchestra.run.vm07.stdout: "secret_key": "" 2026-03-09T14:32:56.124 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:32:56.124 INFO:teuthology.orchestra.run.vm07.stdout: "placement_pools": [ 2026-03-09T14:32:56.124 INFO:teuthology.orchestra.run.vm07.stdout: { 2026-03-09T14:32:56.124 INFO:teuthology.orchestra.run.vm07.stdout: "key": "default-placement", 2026-03-09T14:32:56.124 INFO:teuthology.orchestra.run.vm07.stdout: "val": { 2026-03-09T14:32:56.124 INFO:teuthology.orchestra.run.vm07.stdout: "index_pool": "z.rgw.buckets.index", 2026-03-09T14:32:56.124 INFO:teuthology.orchestra.run.vm07.stdout: "storage_classes": { 2026-03-09T14:32:56.124 INFO:teuthology.orchestra.run.vm07.stdout: "STANDARD": { 2026-03-09T14:32:56.124 INFO:teuthology.orchestra.run.vm07.stdout: "data_pool": "z.rgw.buckets.data" 2026-03-09T14:32:56.124 INFO:teuthology.orchestra.run.vm07.stdout: } 2026-03-09T14:32:56.124 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:32:56.124 INFO:teuthology.orchestra.run.vm07.stdout: "data_extra_pool": "z.rgw.buckets.non-ec", 2026-03-09T14:32:56.124 INFO:teuthology.orchestra.run.vm07.stdout: "index_type": 0 2026-03-09T14:32:56.124 INFO:teuthology.orchestra.run.vm07.stdout: } 2026-03-09T14:32:56.124 INFO:teuthology.orchestra.run.vm07.stdout: } 2026-03-09T14:32:56.124 INFO:teuthology.orchestra.run.vm07.stdout: ], 2026-03-09T14:32:56.124 INFO:teuthology.orchestra.run.vm07.stdout: "realm_id": "26cad96a-5191-403a-8074-ea46ba6f5132", 2026-03-09T14:32:56.124 INFO:teuthology.orchestra.run.vm07.stdout: "notif_pool": "z.rgw.log:notif" 2026-03-09T14:32:56.124 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T14:32:56.191 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'radosgw-admin period update --rgw-realm=r --commit' 2026-03-09T14:32:56.399 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:56 vm11 bash[33974]: ts=2026-03-09T14:32:56.379Z caller=head.go:604 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=1 2026-03-09T14:32:56.399 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:56 vm11 bash[33974]: ts=2026-03-09T14:32:56.379Z caller=head.go:604 level=info component=tsdb msg="WAL segment loaded" segment=1 maxSegment=1 2026-03-09T14:32:56.399 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:56 vm11 bash[33974]: ts=2026-03-09T14:32:56.379Z caller=head.go:610 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=216.616µs wal_replay_duration=1.3678791s total_replay_duration=1.36810889s 2026-03-09T14:32:56.399 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:56 vm11 bash[33974]: ts=2026-03-09T14:32:56.381Z caller=main.go:944 level=info fs_type=EXT4_SUPER_MAGIC 2026-03-09T14:32:56.399 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:56 vm11 bash[33974]: ts=2026-03-09T14:32:56.381Z caller=main.go:947 level=info msg="TSDB started" 2026-03-09T14:32:56.399 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:56 vm11 bash[33974]: ts=2026-03-09T14:32:56.381Z caller=main.go:1128 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 2026-03-09T14:32:56.399 
INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:56 vm11 bash[33974]: ts=2026-03-09T14:32:56.398Z caller=main.go:1165 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=16.582677ms db_storage=1.002µs remote_storage=1.664µs web_handler=491ns query_engine=781ns scrape=1.099026ms scrape_sd=29.547µs notify=22.523µs notify_sd=13.757µs rules=14.97225ms 2026-03-09T14:32:56.399 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:56 vm11 bash[17885]: audit 2026-03-09T14:32:54.869113+0000 mgr.y (mgr.24310) 42 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T14:32:56.399 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:56 vm11 bash[17885]: audit 2026-03-09T14:32:54.870686+0000 mgr.y (mgr.24310) 43 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.123.107:9093"}]: dispatch 2026-03-09T14:32:56.399 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:56 vm11 bash[17885]: audit 2026-03-09T14:32:54.888369+0000 mgr.y (mgr.24310) 44 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T14:32:56.399 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:56 vm11 bash[17885]: audit 2026-03-09T14:32:54.889736+0000 mgr.y (mgr.24310) 45 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.123.111:3000"}]: dispatch 2026-03-09T14:32:56.399 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:56 vm11 bash[17885]: audit 2026-03-09T14:32:54.901606+0000 mgr.y (mgr.24310) 46 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T14:32:56.400 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:56 vm11 bash[17885]: audit 2026-03-09T14:32:54.908053+0000 mgr.y (mgr.24310) 47 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.123.111:9095"}]: dispatch 2026-03-09T14:32:56.400 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:56 vm11 bash[17885]: cluster 2026-03-09T14:32:55.008494+0000 mgr.y (mgr.24310) 48 : cluster [DBG] pgmap v20: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:56.400 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:56 vm11 bash[17885]: audit 2026-03-09T14:32:55.132130+0000 mon.a (mon.0) 624 : audit [INF] from='client.? 192.168.123.107:0/2288661750' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-09T14:32:56.400 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:56 vm11 bash[17885]: cluster 2026-03-09T14:32:55.132182+0000 mon.a (mon.0) 625 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-09T14:32:56.400 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:56 vm11 bash[17885]: cluster 2026-03-09T14:32:56.134784+0000 mon.a (mon.0) 626 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-09T14:32:56.414 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:56 vm07 bash[17480]: audit 2026-03-09T14:32:54.869113+0000 mgr.y (mgr.24310) 42 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T14:32:56.414 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:56 vm07 bash[17480]: audit 2026-03-09T14:32:54.870686+0000 mgr.y (mgr.24310) 43 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.123.107:9093"}]: dispatch 2026-03-09T14:32:56.414 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:56 vm07 bash[17480]: audit 2026-03-09T14:32:54.888369+0000 mgr.y (mgr.24310) 44 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T14:32:56.414 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:56 vm07 bash[17480]: audit 2026-03-09T14:32:54.889736+0000 mgr.y (mgr.24310) 45 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.123.111:3000"}]: dispatch 2026-03-09T14:32:56.414 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:56 vm07 bash[17480]: audit 2026-03-09T14:32:54.901606+0000 mgr.y (mgr.24310) 46 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T14:32:56.414 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:56 vm07 bash[17480]: audit 2026-03-09T14:32:54.908053+0000 mgr.y (mgr.24310) 47 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.123.111:9095"}]: dispatch 2026-03-09T14:32:56.414 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:56 vm07 bash[17480]: cluster 2026-03-09T14:32:55.008494+0000 mgr.y (mgr.24310) 48 : cluster [DBG] pgmap v20: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:56.414 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:56 vm07 bash[17480]: audit 2026-03-09T14:32:55.132130+0000 mon.a (mon.0) 624 : audit [INF] from='client.? 192.168.123.107:0/2288661750' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-09T14:32:56.414 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:56 vm07 bash[17480]: cluster 2026-03-09T14:32:55.132182+0000 mon.a (mon.0) 625 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-09T14:32:56.414 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:56 vm07 bash[17480]: cluster 2026-03-09T14:32:56.134784+0000 mon.a (mon.0) 626 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-09T14:32:56.414 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:56 vm07 bash[22585]: audit 2026-03-09T14:32:54.869113+0000 mgr.y (mgr.24310) 42 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T14:32:56.415 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:56 vm07 bash[22585]: audit 2026-03-09T14:32:54.870686+0000 mgr.y (mgr.24310) 43 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.123.107:9093"}]: dispatch 2026-03-09T14:32:56.415 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:56 vm07 bash[22585]: audit 2026-03-09T14:32:54.888369+0000 mgr.y (mgr.24310) 44 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T14:32:56.415 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:56 vm07 bash[22585]: audit 2026-03-09T14:32:54.889736+0000 mgr.y (mgr.24310) 45 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.123.111:3000"}]: dispatch 2026-03-09T14:32:56.415 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:56 vm07 bash[22585]: audit 2026-03-09T14:32:54.901606+0000 mgr.y (mgr.24310) 46 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T14:32:56.415 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:56 vm07 bash[22585]: audit 2026-03-09T14:32:54.908053+0000 mgr.y (mgr.24310) 47 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.123.111:9095"}]: dispatch 2026-03-09T14:32:56.415 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:56 vm07 bash[22585]: cluster 2026-03-09T14:32:55.008494+0000 mgr.y (mgr.24310) 48 : cluster [DBG] pgmap v20: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:56.415 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:56 vm07 bash[22585]: audit 2026-03-09T14:32:55.132130+0000 mon.a (mon.0) 624 : audit [INF] from='client.? 192.168.123.107:0/2288661750' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-09T14:32:56.415 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:56 vm07 bash[22585]: cluster 2026-03-09T14:32:55.132182+0000 mon.a (mon.0) 625 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-09T14:32:56.415 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:56 vm07 bash[22585]: cluster 2026-03-09T14:32:56.134784+0000 mon.a (mon.0) 626 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-09T14:32:56.757 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:32:56 vm11 bash[33974]: ts=2026-03-09T14:32:56.398Z caller=main.go:896 level=info msg="Server is ready to receive web requests." 2026-03-09T14:32:56.914 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:32:56 vm07 bash[42609]: level=info ts=2026-03-09T14:32:56.519Z caller=cluster.go:696 component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000359025s 2026-03-09T14:32:58.413 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:58 vm07 bash[22585]: cluster 2026-03-09T14:32:57.008873+0000 mgr.y (mgr.24310) 49 : cluster [DBG] pgmap v23: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:58.413 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:58 vm07 bash[22585]: cluster 2026-03-09T14:32:57.162726+0000 mon.a (mon.0) 627 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in 2026-03-09T14:32:58.413 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:58 vm07 bash[22585]: audit 2026-03-09T14:32:57.167004+0000 mon.c (mon.1) 23 : audit [INF] from='client.? 192.168.123.107:0/441687261' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.log","app": "rgw"}]: dispatch 2026-03-09T14:32:58.414 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:58 vm07 bash[22585]: audit 2026-03-09T14:32:57.175708+0000 mon.a (mon.0) 628 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.log","app": "rgw"}]: dispatch 2026-03-09T14:32:58.414 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:58 vm07 bash[22585]: audit 2026-03-09T14:32:58.075539+0000 mon.a (mon.0) 629 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:58.414 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:58 vm07 bash[17480]: cluster 2026-03-09T14:32:57.008873+0000 mgr.y (mgr.24310) 49 : cluster [DBG] pgmap v23: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:58.414 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:58 vm07 bash[17480]: cluster 2026-03-09T14:32:57.162726+0000 mon.a (mon.0) 627 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in 2026-03-09T14:32:58.414 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:58 vm07 bash[17480]: audit 2026-03-09T14:32:57.167004+0000 mon.c (mon.1) 23 : audit [INF] from='client.? 192.168.123.107:0/441687261' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.log","app": "rgw"}]: dispatch 2026-03-09T14:32:58.414 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:58 vm07 bash[17480]: audit 2026-03-09T14:32:57.175708+0000 mon.a (mon.0) 628 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.log","app": "rgw"}]: dispatch 2026-03-09T14:32:58.414 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:58 vm07 bash[17480]: audit 2026-03-09T14:32:58.075539+0000 mon.a (mon.0) 629 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:58.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:58 vm11 bash[17885]: cluster 2026-03-09T14:32:57.008873+0000 mgr.y (mgr.24310) 49 : cluster [DBG] pgmap v23: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:32:58.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:58 vm11 bash[17885]: cluster 2026-03-09T14:32:57.162726+0000 mon.a (mon.0) 627 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in 2026-03-09T14:32:58.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:58 vm11 bash[17885]: audit 2026-03-09T14:32:57.167004+0000 mon.c (mon.1) 23 : audit [INF] from='client.? 192.168.123.107:0/441687261' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.log","app": "rgw"}]: dispatch 2026-03-09T14:32:58.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:58 vm11 bash[17885]: audit 2026-03-09T14:32:57.175708+0000 mon.a (mon.0) 628 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.log","app": "rgw"}]: dispatch 2026-03-09T14:32:58.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:58 vm11 bash[17885]: audit 2026-03-09T14:32:58.075539+0000 mon.a (mon.0) 629 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:59.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:59 vm11 bash[17885]: audit 2026-03-09T14:32:58.176523+0000 mon.a (mon.0) 630 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "z.rgw.log","app": "rgw"}]': finished 2026-03-09T14:32:59.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:59 vm11 bash[17885]: cluster 2026-03-09T14:32:58.177135+0000 mon.a (mon.0) 631 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-09T14:32:59.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:59 vm11 bash[17885]: audit 2026-03-09T14:32:58.924246+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:59.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:32:59 vm11 bash[17885]: audit 2026-03-09T14:32:58.930970+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:59.663 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:59 vm07 bash[22585]: audit 2026-03-09T14:32:58.176523+0000 mon.a (mon.0) 630 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "z.rgw.log","app": "rgw"}]': finished 2026-03-09T14:32:59.664 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:59 vm07 bash[22585]: cluster 2026-03-09T14:32:58.177135+0000 mon.a (mon.0) 631 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-09T14:32:59.664 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:59 vm07 bash[22585]: audit 2026-03-09T14:32:58.924246+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:59.664 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:32:59 vm07 bash[22585]: audit 2026-03-09T14:32:58.930970+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:59.664 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:59 vm07 bash[17480]: audit 2026-03-09T14:32:58.176523+0000 mon.a (mon.0) 630 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "z.rgw.log","app": "rgw"}]': finished 2026-03-09T14:32:59.664 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:59 vm07 bash[17480]: cluster 2026-03-09T14:32:58.177135+0000 mon.a (mon.0) 631 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in 2026-03-09T14:32:59.664 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:59 vm07 bash[17480]: audit 2026-03-09T14:32:58.924246+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:32:59.664 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:32:59 vm07 bash[17480]: audit 2026-03-09T14:32:58.930970+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:00.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:00 vm11 bash[17885]: cluster 2026-03-09T14:32:59.009256+0000 mgr.y (mgr.24310) 50 : cluster [DBG] pgmap v26: 65 pgs: 64 unknown, 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:33:00.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:00 vm11 bash[17885]: cluster 2026-03-09T14:32:59.190728+0000 mon.a (mon.0) 634 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in 2026-03-09T14:33:00.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:00 vm11 bash[17885]: audit 2026-03-09T14:32:59.197633+0000 mon.c (mon.1) 24 : audit [INF] from='client.? 192.168.123.107:0/441687261' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.control","app": "rgw"}]: dispatch 2026-03-09T14:33:00.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:00 vm11 bash[17885]: audit 2026-03-09T14:32:59.197837+0000 mon.a (mon.0) 635 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.control","app": "rgw"}]: dispatch 2026-03-09T14:33:00.663 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:00 vm07 bash[22585]: cluster 2026-03-09T14:32:59.009256+0000 mgr.y (mgr.24310) 50 : cluster [DBG] pgmap v26: 65 pgs: 64 unknown, 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:33:00.663 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:00 vm07 bash[22585]: cluster 2026-03-09T14:32:59.190728+0000 mon.a (mon.0) 634 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in 2026-03-09T14:33:00.663 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:00 vm07 bash[22585]: audit 2026-03-09T14:32:59.197633+0000 mon.c (mon.1) 24 : audit [INF] from='client.? 192.168.123.107:0/441687261' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.control","app": "rgw"}]: dispatch 2026-03-09T14:33:00.663 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:00 vm07 bash[22585]: audit 2026-03-09T14:32:59.197837+0000 mon.a (mon.0) 635 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.control","app": "rgw"}]: dispatch 2026-03-09T14:33:00.664 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:00 vm07 bash[17480]: cluster 2026-03-09T14:32:59.009256+0000 mgr.y (mgr.24310) 50 : cluster [DBG] pgmap v26: 65 pgs: 64 unknown, 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:33:00.664 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:00 vm07 bash[17480]: cluster 2026-03-09T14:32:59.190728+0000 mon.a (mon.0) 634 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in 2026-03-09T14:33:00.664 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:00 vm07 bash[17480]: audit 2026-03-09T14:32:59.197633+0000 mon.c (mon.1) 24 : audit [INF] from='client.? 192.168.123.107:0/441687261' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.control","app": "rgw"}]: dispatch 2026-03-09T14:33:00.664 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:00 vm07 bash[17480]: audit 2026-03-09T14:32:59.197837+0000 mon.a (mon.0) 635 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.control","app": "rgw"}]: dispatch 2026-03-09T14:33:01.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:01 vm11 bash[17885]: audit 2026-03-09T14:33:00.195231+0000 mon.a (mon.0) 636 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "z.rgw.control","app": "rgw"}]': finished 2026-03-09T14:33:01.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:01 vm11 bash[17885]: cluster 2026-03-09T14:33:00.212257+0000 mon.a (mon.0) 637 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-09T14:33:01.663 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:01 vm07 bash[22585]: audit 2026-03-09T14:33:00.195231+0000 mon.a (mon.0) 636 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "z.rgw.control","app": "rgw"}]': finished 2026-03-09T14:33:01.663 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:01 vm07 bash[22585]: cluster 2026-03-09T14:33:00.212257+0000 mon.a (mon.0) 637 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-09T14:33:01.663 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:01 vm07 bash[17480]: audit 2026-03-09T14:33:00.195231+0000 mon.a (mon.0) 636 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "z.rgw.control","app": "rgw"}]': finished 2026-03-09T14:33:01.663 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:01 vm07 bash[17480]: cluster 2026-03-09T14:33:00.212257+0000 mon.a (mon.0) 637 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-09T14:33:02.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:02 vm11 bash[17885]: cluster 2026-03-09T14:33:01.009627+0000 mgr.y (mgr.24310) 51 : cluster [DBG] pgmap v29: 97 pgs: 32 unknown, 65 active+clean; 451 KiB data, 51 MiB used, 160 GiB / 160 GiB avail; 7.5 KiB/s rd, 4.0 KiB/s wr, 12 op/s 2026-03-09T14:33:02.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:02 vm11 bash[17885]: cluster 2026-03-09T14:33:01.212213+0000 mon.a (mon.0) 638 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-09T14:33:02.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:02 vm11 bash[17885]: audit 2026-03-09T14:33:01.225907+0000 mon.a (mon.0) 639 : audit [INF] from='client.? 192.168.123.107:0/2734446433' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T14:33:02.535 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:02 vm07 bash[22585]: cluster 2026-03-09T14:33:01.009627+0000 mgr.y (mgr.24310) 51 : cluster [DBG] pgmap v29: 97 pgs: 32 unknown, 65 active+clean; 451 KiB data, 51 MiB used, 160 GiB / 160 GiB avail; 7.5 KiB/s rd, 4.0 KiB/s wr, 12 op/s 2026-03-09T14:33:02.535 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:02 vm07 bash[22585]: cluster 2026-03-09T14:33:01.212213+0000 mon.a (mon.0) 638 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-09T14:33:02.535 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:02 vm07 bash[22585]: audit 2026-03-09T14:33:01.225907+0000 mon.a (mon.0) 639 : audit [INF] from='client.? 192.168.123.107:0/2734446433' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T14:33:02.535 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:02 vm07 bash[17480]: cluster 2026-03-09T14:33:01.009627+0000 mgr.y (mgr.24310) 51 : cluster [DBG] pgmap v29: 97 pgs: 32 unknown, 65 active+clean; 451 KiB data, 51 MiB used, 160 GiB / 160 GiB avail; 7.5 KiB/s rd, 4.0 KiB/s wr, 12 op/s 2026-03-09T14:33:02.535 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:02 vm07 bash[17480]: cluster 2026-03-09T14:33:01.212213+0000 mon.a (mon.0) 638 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-09T14:33:02.535 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:02 vm07 bash[17480]: audit 2026-03-09T14:33:01.225907+0000 mon.a (mon.0) 639 : audit [INF] from='client.? 192.168.123.107:0/2734446433' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T14:33:02.913 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:33:02 vm07 bash[17785]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:33:02] "GET /metrics HTTP/1.1" 200 191106 "" "Prometheus/2.33.4" 2026-03-09T14:33:03.507 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:33:03 vm11 bash[18539]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:33:03] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T14:33:03.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:03 vm11 bash[17885]: audit 2026-03-09T14:33:02.210442+0000 mon.a (mon.0) 640 : audit [INF] from='client.? 
192.168.123.107:0/2734446433' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "z.rgw.meta","app": "rgw"}]': finished 2026-03-09T14:33:03.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:03 vm11 bash[17885]: cluster 2026-03-09T14:33:02.210540+0000 mon.a (mon.0) 641 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in 2026-03-09T14:33:03.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:03 vm11 bash[17885]: audit 2026-03-09T14:33:02.239747+0000 mon.a (mon.0) 642 : audit [INF] from='client.? 192.168.123.107:0/2734446433' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T14:33:03.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:03 vm11 bash[17885]: audit 2026-03-09T14:33:03.208833+0000 mon.a (mon.0) 643 : audit [INF] from='client.? 192.168.123.107:0/2734446433' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T14:33:03.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:03 vm11 bash[17885]: cluster 2026-03-09T14:33:03.208943+0000 mon.a (mon.0) 644 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-09T14:33:03.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:03 vm11 bash[17885]: audit 2026-03-09T14:33:03.212151+0000 mon.a (mon.0) 645 : audit [INF] from='client.? 192.168.123.107:0/2734446433' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_num_min", "val": "8"}]: dispatch 2026-03-09T14:33:03.663 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:03 vm07 bash[22585]: audit 2026-03-09T14:33:02.210442+0000 mon.a (mon.0) 640 : audit [INF] from='client.? 192.168.123.107:0/2734446433' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "z.rgw.meta","app": "rgw"}]': finished 2026-03-09T14:33:03.663 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:03 vm07 bash[22585]: cluster 2026-03-09T14:33:02.210540+0000 mon.a (mon.0) 641 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in 2026-03-09T14:33:03.663 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:03 vm07 bash[22585]: audit 2026-03-09T14:33:02.239747+0000 mon.a (mon.0) 642 : audit [INF] from='client.? 192.168.123.107:0/2734446433' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T14:33:03.663 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:03 vm07 bash[22585]: audit 2026-03-09T14:33:03.208833+0000 mon.a (mon.0) 643 : audit [INF] from='client.? 192.168.123.107:0/2734446433' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T14:33:03.663 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:03 vm07 bash[22585]: cluster 2026-03-09T14:33:03.208943+0000 mon.a (mon.0) 644 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-09T14:33:03.663 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:03 vm07 bash[22585]: audit 2026-03-09T14:33:03.212151+0000 mon.a (mon.0) 645 : audit [INF] from='client.? 192.168.123.107:0/2734446433' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_num_min", "val": "8"}]: dispatch 2026-03-09T14:33:03.663 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:03 vm07 bash[17480]: audit 2026-03-09T14:33:02.210442+0000 mon.a (mon.0) 640 : audit [INF] from='client.? 
192.168.123.107:0/2734446433' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "z.rgw.meta","app": "rgw"}]': finished 2026-03-09T14:33:03.664 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:03 vm07 bash[17480]: cluster 2026-03-09T14:33:02.210540+0000 mon.a (mon.0) 641 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in 2026-03-09T14:33:03.664 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:03 vm07 bash[17480]: audit 2026-03-09T14:33:02.239747+0000 mon.a (mon.0) 642 : audit [INF] from='client.? 192.168.123.107:0/2734446433' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T14:33:03.664 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:03 vm07 bash[17480]: audit 2026-03-09T14:33:03.208833+0000 mon.a (mon.0) 643 : audit [INF] from='client.? 192.168.123.107:0/2734446433' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T14:33:03.664 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:03 vm07 bash[17480]: cluster 2026-03-09T14:33:03.208943+0000 mon.a (mon.0) 644 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-09T14:33:03.664 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:03 vm07 bash[17480]: audit 2026-03-09T14:33:03.212151+0000 mon.a (mon.0) 645 : audit [INF] from='client.? 192.168.123.107:0/2734446433' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_num_min", "val": "8"}]: dispatch 2026-03-09T14:33:04.492 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "id": "df1c3807-0dea-4f6e-b6a2-65bf0b9c2457", 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "epoch": 1, 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "predecessor_uuid": "1e3bb30c-00d5-427f-812a-1f220a7ebff0", 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "sync_status": [], 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "period_map": { 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "id": "df1c3807-0dea-4f6e-b6a2-65bf0b9c2457", 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "zonegroups": [ 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: { 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "id": "05d682be-e279-4c2e-9736-4f20fdd8ea4a", 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "name": "default", 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "api_name": "default", 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "is_master": "true", 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "endpoints": [], 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "hostnames": [], 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "hostnames_s3website": [], 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "master_zone": "76d7a7f6-1e92-409d-9f83-fc503f578da9", 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "zones": [ 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: { 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "id": "76d7a7f6-1e92-409d-9f83-fc503f578da9", 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "name": "z", 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: 
"endpoints": [], 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "log_meta": "false", 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "log_data": "false", 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "bucket_index_max_shards": 11, 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "read_only": "false", 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "tier_type": "", 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "sync_from_all": "true", 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "sync_from": [], 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "redirect_zone": "" 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: } 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: ], 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "placement_targets": [ 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: { 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "name": "default-placement", 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "tags": [], 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "storage_classes": [ 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "STANDARD" 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: ] 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: } 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: ], 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "default_placement": "default-placement", 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "realm_id": "26cad96a-5191-403a-8074-ea46ba6f5132", 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "sync_policy": { 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "groups": [] 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: } 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: } 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: ], 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "short_zone_ids": [ 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: { 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "key": "76d7a7f6-1e92-409d-9f83-fc503f578da9", 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "val": 2391214498 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: } 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: ] 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "master_zonegroup": "05d682be-e279-4c2e-9736-4f20fdd8ea4a", 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "master_zone": "76d7a7f6-1e92-409d-9f83-fc503f578da9", 2026-03-09T14:33:04.493 INFO:teuthology.orchestra.run.vm07.stdout: "period_config": { 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout: "bucket_quota": { 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout: "enabled": false, 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout: "check_on_raw": false, 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout: "max_size": -1, 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout: "max_size_kb": 0, 2026-03-09T14:33:04.494 
INFO:teuthology.orchestra.run.vm07.stdout: "max_objects": -1 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout: "user_quota": { 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout: "enabled": false, 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout: "check_on_raw": false, 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout: "max_size": -1, 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout: "max_size_kb": 0, 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout: "max_objects": -1 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout: "user_ratelimit": { 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout: "max_read_ops": 0, 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout: "max_write_ops": 0, 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout: "max_read_bytes": 0, 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout: "max_write_bytes": 0, 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout: "enabled": false 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout: "bucket_ratelimit": { 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout: "max_read_ops": 0, 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout: "max_write_ops": 0, 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout: "max_read_bytes": 0, 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout: "max_write_bytes": 0, 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout: "enabled": false 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout: "anonymous_ratelimit": { 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout: "max_read_ops": 0, 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout: "max_write_ops": 0, 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout: "max_read_bytes": 0, 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout: "max_write_bytes": 0, 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout: "enabled": false 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout: } 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout: "realm_id": "26cad96a-5191-403a-8074-ea46ba6f5132", 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout: "realm_name": "r", 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout: "realm_epoch": 2 2026-03-09T14:33:04.494 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T14:33:04.504 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:04 vm07 bash[22585]: cluster 2026-03-09T14:33:03.010009+0000 mgr.y (mgr.24310) 52 : cluster [DBG] pgmap v32: 129 pgs: 64 unknown, 65 active+clean; 451 KiB data, 51 MiB used, 160 GiB / 160 GiB avail; 7.5 KiB/s rd, 4.0 KiB/s wr, 12 op/s 2026-03-09T14:33:04.504 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:04 vm07 bash[22585]: audit 2026-03-09T14:33:04.208500+0000 mon.a (mon.0) 646 : audit [INF] from='client.? 
192.168.123.107:0/2734446433' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_num_min", "val": "8"}]': finished 2026-03-09T14:33:04.504 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:04 vm07 bash[22585]: cluster 2026-03-09T14:33:04.208658+0000 mon.a (mon.0) 647 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-09T14:33:04.505 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:04 vm07 bash[17480]: cluster 2026-03-09T14:33:03.010009+0000 mgr.y (mgr.24310) 52 : cluster [DBG] pgmap v32: 129 pgs: 64 unknown, 65 active+clean; 451 KiB data, 51 MiB used, 160 GiB / 160 GiB avail; 7.5 KiB/s rd, 4.0 KiB/s wr, 12 op/s 2026-03-09T14:33:04.505 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:04 vm07 bash[17480]: audit 2026-03-09T14:33:04.208500+0000 mon.a (mon.0) 646 : audit [INF] from='client.? 192.168.123.107:0/2734446433' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_num_min", "val": "8"}]': finished 2026-03-09T14:33:04.505 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:04 vm07 bash[17480]: cluster 2026-03-09T14:33:04.208658+0000 mon.a (mon.0) 647 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-09T14:33:04.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:04 vm11 bash[17885]: cluster 2026-03-09T14:33:03.010009+0000 mgr.y (mgr.24310) 52 : cluster [DBG] pgmap v32: 129 pgs: 64 unknown, 65 active+clean; 451 KiB data, 51 MiB used, 160 GiB / 160 GiB avail; 7.5 KiB/s rd, 4.0 KiB/s wr, 12 op/s 2026-03-09T14:33:04.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:04 vm11 bash[17885]: audit 2026-03-09T14:33:04.208500+0000 mon.a (mon.0) 646 : audit [INF] from='client.? 192.168.123.107:0/2734446433' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_num_min", "val": "8"}]': finished 2026-03-09T14:33:04.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:04 vm11 bash[17885]: cluster 2026-03-09T14:33:04.208658+0000 mon.a (mon.0) 647 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-09T14:33:04.577 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch apply rgw foo --realm r --zone z --placement=2 --port=8000' 2026-03-09T14:33:04.756 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:33:04 vm07 bash[42609]: level=info ts=2026-03-09T14:33:04.522Z caller=cluster.go:688 component=cluster msg="gossip settled; proceeding" elapsed=10.00337831s 2026-03-09T14:33:05.042 INFO:teuthology.orchestra.run.vm07.stdout:Scheduled rgw.foo update... 2026-03-09T14:33:05.129 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch apply rgw smpl' 2026-03-09T14:33:05.612 INFO:teuthology.orchestra.run.vm07.stdout:Scheduled rgw.smpl update... 
2026-03-09T14:33:05.676 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph osd pool create foo' 2026-03-09T14:33:06.163 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:06 vm07 bash[22585]: cluster 2026-03-09T14:33:05.010331+0000 mgr.y (mgr.24310) 53 : cluster [DBG] pgmap v35: 129 pgs: 129 active+clean; 451 KiB data, 53 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s 2026-03-09T14:33:06.163 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:06 vm07 bash[22585]: audit 2026-03-09T14:33:05.032256+0000 mgr.y (mgr.24310) 54 : audit [DBG] from='client.24499 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo", "realm": "r", "zone": "z", "placement": "2", "port": 8000, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:33:06.163 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:06 vm07 bash[22585]: cephadm 2026-03-09T14:33:05.033450+0000 mgr.y (mgr.24310) 55 : cephadm [INF] Saving service rgw.foo spec with placement count:2 2026-03-09T14:33:06.163 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:06 vm07 bash[22585]: audit 2026-03-09T14:33:05.039245+0000 mon.a (mon.0) 648 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:06.163 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:06 vm07 bash[22585]: audit 2026-03-09T14:33:05.062971+0000 mon.b (mon.2) 67 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:33:06.163 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:06 vm07 bash[22585]: audit 2026-03-09T14:33:05.064358+0000 mon.b (mon.2) 68 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:33:06.163 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:06 vm07 bash[22585]: audit 2026-03-09T14:33:05.601866+0000 mgr.y (mgr.24310) 56 : audit [DBG] from='client.24505 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "smpl", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:33:06.163 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:06 vm07 bash[22585]: cephadm 2026-03-09T14:33:05.602588+0000 mgr.y (mgr.24310) 57 : cephadm [INF] Saving service rgw.smpl spec with placement count:2 2026-03-09T14:33:06.163 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:06 vm07 bash[22585]: audit 2026-03-09T14:33:05.608949+0000 mon.a (mon.0) 649 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:06.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:06 vm07 bash[17480]: cluster 2026-03-09T14:33:05.010331+0000 mgr.y (mgr.24310) 53 : cluster [DBG] pgmap v35: 129 pgs: 129 active+clean; 451 KiB data, 53 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s 2026-03-09T14:33:06.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:06 vm07 bash[17480]: audit 2026-03-09T14:33:05.032256+0000 mgr.y (mgr.24310) 54 : audit [DBG] from='client.24499 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo", "realm": "r", "zone": "z", "placement": "2", "port": 8000, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:33:06.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:06 vm07 bash[17480]: cephadm 2026-03-09T14:33:05.033450+0000 mgr.y (mgr.24310) 55 : cephadm [INF] 
Saving service rgw.foo spec with placement count:2 2026-03-09T14:33:06.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:06 vm07 bash[17480]: audit 2026-03-09T14:33:05.039245+0000 mon.a (mon.0) 648 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:06.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:06 vm07 bash[17480]: audit 2026-03-09T14:33:05.062971+0000 mon.b (mon.2) 67 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:33:06.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:06 vm07 bash[17480]: audit 2026-03-09T14:33:05.064358+0000 mon.b (mon.2) 68 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:33:06.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:06 vm07 bash[17480]: audit 2026-03-09T14:33:05.601866+0000 mgr.y (mgr.24310) 56 : audit [DBG] from='client.24505 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "smpl", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:33:06.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:06 vm07 bash[17480]: cephadm 2026-03-09T14:33:05.602588+0000 mgr.y (mgr.24310) 57 : cephadm [INF] Saving service rgw.smpl spec with placement count:2 2026-03-09T14:33:06.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:06 vm07 bash[17480]: audit 2026-03-09T14:33:05.608949+0000 mon.a (mon.0) 649 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:06.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:06 vm11 bash[17885]: cluster 2026-03-09T14:33:05.010331+0000 mgr.y (mgr.24310) 53 : cluster [DBG] pgmap v35: 129 pgs: 129 active+clean; 451 KiB data, 53 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s 2026-03-09T14:33:06.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:06 vm11 bash[17885]: audit 2026-03-09T14:33:05.032256+0000 mgr.y (mgr.24310) 54 : audit [DBG] from='client.24499 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo", "realm": "r", "zone": "z", "placement": "2", "port": 8000, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:33:06.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:06 vm11 bash[17885]: cephadm 2026-03-09T14:33:05.033450+0000 mgr.y (mgr.24310) 55 : cephadm [INF] Saving service rgw.foo spec with placement count:2 2026-03-09T14:33:06.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:06 vm11 bash[17885]: audit 2026-03-09T14:33:05.039245+0000 mon.a (mon.0) 648 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:06.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:06 vm11 bash[17885]: audit 2026-03-09T14:33:05.062971+0000 mon.b (mon.2) 67 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:33:06.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:06 vm11 bash[17885]: audit 2026-03-09T14:33:05.064358+0000 mon.b (mon.2) 68 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:33:06.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:06 vm11 bash[17885]: audit 2026-03-09T14:33:05.601866+0000 mgr.y (mgr.24310) 56 : audit [DBG] from='client.24505 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "smpl", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:33:06.508 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:06 vm11 bash[17885]: cephadm 2026-03-09T14:33:05.602588+0000 mgr.y (mgr.24310) 57 : cephadm [INF] Saving service rgw.smpl spec with placement count:2 2026-03-09T14:33:06.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:06 vm11 bash[17885]: audit 2026-03-09T14:33:05.608949+0000 mon.a (mon.0) 649 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:07.079 INFO:teuthology.orchestra.run.vm07.stderr:pool 'foo' created 2026-03-09T14:33:07.159 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'rbd pool init foo' 2026-03-09T14:33:07.310 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:07 vm07 bash[22585]: audit 2026-03-09T14:33:06.171194+0000 mon.c (mon.1) 25 : audit [INF] from='client.? 192.168.123.107:0/2751742128' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "foo"}]: dispatch 2026-03-09T14:33:07.310 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:07 vm07 bash[22585]: audit 2026-03-09T14:33:06.171478+0000 mon.a (mon.0) 650 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "foo"}]: dispatch 2026-03-09T14:33:07.310 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:07 vm07 bash[17480]: audit 2026-03-09T14:33:06.171194+0000 mon.c (mon.1) 25 : audit [INF] from='client.? 192.168.123.107:0/2751742128' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "foo"}]: dispatch 2026-03-09T14:33:07.310 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:07 vm07 bash[17480]: audit 2026-03-09T14:33:06.171478+0000 mon.a (mon.0) 650 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "foo"}]: dispatch 2026-03-09T14:33:07.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:07 vm11 bash[17885]: audit 2026-03-09T14:33:06.171194+0000 mon.c (mon.1) 25 : audit [INF] from='client.? 192.168.123.107:0/2751742128' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "foo"}]: dispatch 2026-03-09T14:33:07.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:07 vm11 bash[17885]: audit 2026-03-09T14:33:06.171478+0000 mon.a (mon.0) 650 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "foo"}]: dispatch 2026-03-09T14:33:08.359 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:08 vm07 bash[22585]: cluster 2026-03-09T14:33:07.010667+0000 mgr.y (mgr.24310) 58 : cluster [DBG] pgmap v36: 129 pgs: 129 active+clean; 451 KiB data, 53 MiB used, 160 GiB / 160 GiB avail; 176 B/s rd, 352 B/s wr, 1 op/s 2026-03-09T14:33:08.359 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:08 vm07 bash[22585]: audit 2026-03-09T14:33:07.072860+0000 mon.a (mon.0) 651 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "foo"}]': finished 2026-03-09T14:33:08.360 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:08 vm07 bash[22585]: cluster 2026-03-09T14:33:07.072959+0000 mon.a (mon.0) 652 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-09T14:33:08.360 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:08 vm07 bash[22585]: audit 2026-03-09T14:33:07.474909+0000 mon.a (mon.0) 653 : audit [INF] from='client.? 
192.168.123.107:0/1910254680' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "foo","app": "rbd"}]: dispatch 2026-03-09T14:33:08.360 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:08 vm07 bash[17480]: cluster 2026-03-09T14:33:07.010667+0000 mgr.y (mgr.24310) 58 : cluster [DBG] pgmap v36: 129 pgs: 129 active+clean; 451 KiB data, 53 MiB used, 160 GiB / 160 GiB avail; 176 B/s rd, 352 B/s wr, 1 op/s 2026-03-09T14:33:08.360 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:08 vm07 bash[17480]: audit 2026-03-09T14:33:07.072860+0000 mon.a (mon.0) 651 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "foo"}]': finished 2026-03-09T14:33:08.360 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:08 vm07 bash[17480]: cluster 2026-03-09T14:33:07.072959+0000 mon.a (mon.0) 652 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-09T14:33:08.360 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:08 vm07 bash[17480]: audit 2026-03-09T14:33:07.474909+0000 mon.a (mon.0) 653 : audit [INF] from='client.? 192.168.123.107:0/1910254680' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "foo","app": "rbd"}]: dispatch 2026-03-09T14:33:08.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:08 vm11 bash[17885]: cluster 2026-03-09T14:33:07.010667+0000 mgr.y (mgr.24310) 58 : cluster [DBG] pgmap v36: 129 pgs: 129 active+clean; 451 KiB data, 53 MiB used, 160 GiB / 160 GiB avail; 176 B/s rd, 352 B/s wr, 1 op/s 2026-03-09T14:33:08.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:08 vm11 bash[17885]: audit 2026-03-09T14:33:07.072860+0000 mon.a (mon.0) 651 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "foo"}]': finished 2026-03-09T14:33:08.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:08 vm11 bash[17885]: cluster 2026-03-09T14:33:07.072959+0000 mon.a (mon.0) 652 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-09T14:33:08.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:08 vm11 bash[17885]: audit 2026-03-09T14:33:07.474909+0000 mon.a (mon.0) 653 : audit [INF] from='client.? 192.168.123.107:0/1910254680' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "foo","app": "rbd"}]: dispatch 2026-03-09T14:33:08.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:08 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:08.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:08 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:08.908 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:33:08 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:08.908 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:33:08 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:08.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:08 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:08.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:08 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:08.908 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:33:08 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:08.908 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:33:08 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:08.908 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:33:08 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:08.908 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:33:08 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:33:08.908 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:33:08 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:08.908 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:33:08 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:08.908 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:33:08 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:08.908 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:33:08 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:08.908 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:33:08 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:08.909 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:33:08 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:08.909 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:33:08 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:08.909 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:33:08 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:09.163 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:09 vm07 bash[22585]: audit 2026-03-09T14:33:08.086302+0000 mon.a (mon.0) 654 : audit [INF] from='client.? 192.168.123.107:0/1910254680' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "foo","app": "rbd"}]': finished 2026-03-09T14:33:09.163 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:09 vm07 bash[22585]: cluster 2026-03-09T14:33:08.086375+0000 mon.a (mon.0) 655 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in 2026-03-09T14:33:09.163 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:09 vm07 bash[22585]: audit 2026-03-09T14:33:08.161795+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:09.163 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:09 vm07 bash[22585]: audit 2026-03-09T14:33:08.194224+0000 mon.a (mon.0) 657 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:09.163 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:09 vm07 bash[22585]: audit 2026-03-09T14:33:08.200325+0000 mon.a (mon.0) 658 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:09.163 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:09 vm07 bash[22585]: audit 2026-03-09T14:33:08.212320+0000 mon.a (mon.0) 659 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:09.163 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:09 vm07 bash[22585]: audit 2026-03-09T14:33:08.220563+0000 mon.a (mon.0) 660 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:09.163 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:09 vm07 bash[22585]: cephadm 2026-03-09T14:33:08.232360+0000 mgr.y (mgr.24310) 59 : cephadm [INF] Saving service rgw.foo spec with placement count:2 2026-03-09T14:33:09.163 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:09 vm07 bash[22585]: audit 2026-03-09T14:33:08.237677+0000 mon.a (mon.0) 661 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:09.163 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:09 vm07 bash[22585]: audit 2026-03-09T14:33:08.238265+0000 mon.b (mon.2) 69 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.urmgxb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:33:09.163 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:09 vm07 bash[22585]: audit 2026-03-09T14:33:08.238912+0000 mon.a (mon.0) 662 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.urmgxb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:33:09.163 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:09 vm07 bash[22585]: audit 2026-03-09T14:33:08.243402+0000 mon.a (mon.0) 663 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.urmgxb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T14:33:09.163 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:09 vm07 bash[22585]: audit 2026-03-09T14:33:08.249603+0000 mon.a (mon.0) 664 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:09.163 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:09 vm07 bash[22585]: audit 
2026-03-09T14:33:08.250353+0000 mon.b (mon.2) 70 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:33:09.163 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:09 vm07 bash[22585]: cephadm 2026-03-09T14:33:08.251550+0000 mgr.y (mgr.24310) 60 : cephadm [INF] Deploying daemon rgw.foo.vm07.urmgxb on vm07 2026-03-09T14:33:09.163 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:09 vm07 bash[22585]: audit 2026-03-09T14:33:08.938910+0000 mon.a (mon.0) 665 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:09.163 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:09 vm07 bash[22585]: audit 2026-03-09T14:33:08.941118+0000 mon.b (mon.2) 71 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm11.ncyump", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:33:09.163 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:09 vm07 bash[22585]: audit 2026-03-09T14:33:08.941762+0000 mon.a (mon.0) 666 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm11.ncyump", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:33:09.163 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:09 vm07 bash[22585]: audit 2026-03-09T14:33:08.945749+0000 mon.a (mon.0) 667 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm11.ncyump", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T14:33:09.163 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:09 vm07 bash[22585]: audit 2026-03-09T14:33:08.951664+0000 mon.a (mon.0) 668 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:09.163 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:09 vm07 bash[22585]: audit 2026-03-09T14:33:08.954738+0000 mon.b (mon.2) 72 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:33:09.163 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:09 vm07 bash[22585]: cluster 2026-03-09T14:33:09.066642+0000 mon.a (mon.0) 669 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in 2026-03-09T14:33:09.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:09 vm07 bash[17480]: audit 2026-03-09T14:33:08.086302+0000 mon.a (mon.0) 654 : audit [INF] from='client.? 
192.168.123.107:0/1910254680' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "foo","app": "rbd"}]': finished 2026-03-09T14:33:09.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:09 vm07 bash[17480]: cluster 2026-03-09T14:33:08.086375+0000 mon.a (mon.0) 655 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in 2026-03-09T14:33:09.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:09 vm07 bash[17480]: audit 2026-03-09T14:33:08.161795+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:09.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:09 vm07 bash[17480]: audit 2026-03-09T14:33:08.194224+0000 mon.a (mon.0) 657 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:09.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:09 vm07 bash[17480]: audit 2026-03-09T14:33:08.200325+0000 mon.a (mon.0) 658 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:09.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:09 vm07 bash[17480]: audit 2026-03-09T14:33:08.212320+0000 mon.a (mon.0) 659 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:09.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:09 vm07 bash[17480]: audit 2026-03-09T14:33:08.220563+0000 mon.a (mon.0) 660 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:09.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:09 vm07 bash[17480]: cephadm 2026-03-09T14:33:08.232360+0000 mgr.y (mgr.24310) 59 : cephadm [INF] Saving service rgw.foo spec with placement count:2 2026-03-09T14:33:09.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:09 vm07 bash[17480]: audit 2026-03-09T14:33:08.237677+0000 mon.a (mon.0) 661 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:09.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:09 vm07 bash[17480]: audit 2026-03-09T14:33:08.238265+0000 mon.b (mon.2) 69 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.urmgxb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:33:09.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:09 vm07 bash[17480]: audit 2026-03-09T14:33:08.238912+0000 mon.a (mon.0) 662 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.urmgxb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:33:09.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:09 vm07 bash[17480]: audit 2026-03-09T14:33:08.243402+0000 mon.a (mon.0) 663 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.urmgxb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T14:33:09.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:09 vm07 bash[17480]: audit 2026-03-09T14:33:08.249603+0000 mon.a (mon.0) 664 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:09.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:09 vm07 bash[17480]: audit 2026-03-09T14:33:08.250353+0000 mon.b (mon.2) 70 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:33:09.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:09 vm07 bash[17480]: cephadm 2026-03-09T14:33:08.251550+0000 mgr.y (mgr.24310) 60 : cephadm 
[INF] Deploying daemon rgw.foo.vm07.urmgxb on vm07 2026-03-09T14:33:09.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:09 vm07 bash[17480]: audit 2026-03-09T14:33:08.938910+0000 mon.a (mon.0) 665 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:09.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:09 vm07 bash[17480]: audit 2026-03-09T14:33:08.941118+0000 mon.b (mon.2) 71 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm11.ncyump", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:33:09.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:09 vm07 bash[17480]: audit 2026-03-09T14:33:08.941762+0000 mon.a (mon.0) 666 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm11.ncyump", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:33:09.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:09 vm07 bash[17480]: audit 2026-03-09T14:33:08.945749+0000 mon.a (mon.0) 667 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm11.ncyump", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T14:33:09.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:09 vm07 bash[17480]: audit 2026-03-09T14:33:08.951664+0000 mon.a (mon.0) 668 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:09.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:09 vm07 bash[17480]: audit 2026-03-09T14:33:08.954738+0000 mon.b (mon.2) 72 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:33:09.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:09 vm07 bash[17480]: cluster 2026-03-09T14:33:09.066642+0000 mon.a (mon.0) 669 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in 2026-03-09T14:33:09.226 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:09 vm11 bash[17885]: audit 2026-03-09T14:33:08.086302+0000 mon.a (mon.0) 654 : audit [INF] from='client.? 
192.168.123.107:0/1910254680' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "foo","app": "rbd"}]': finished 2026-03-09T14:33:09.226 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:09 vm11 bash[17885]: cluster 2026-03-09T14:33:08.086375+0000 mon.a (mon.0) 655 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in 2026-03-09T14:33:09.226 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:09 vm11 bash[17885]: audit 2026-03-09T14:33:08.161795+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:09.226 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:09 vm11 bash[17885]: audit 2026-03-09T14:33:08.194224+0000 mon.a (mon.0) 657 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:09.226 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:09 vm11 bash[17885]: audit 2026-03-09T14:33:08.200325+0000 mon.a (mon.0) 658 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:09.226 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:09 vm11 bash[17885]: audit 2026-03-09T14:33:08.212320+0000 mon.a (mon.0) 659 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:09.226 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:09 vm11 bash[17885]: audit 2026-03-09T14:33:08.220563+0000 mon.a (mon.0) 660 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:09.226 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:09 vm11 bash[17885]: cephadm 2026-03-09T14:33:08.232360+0000 mgr.y (mgr.24310) 59 : cephadm [INF] Saving service rgw.foo spec with placement count:2 2026-03-09T14:33:09.226 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:09 vm11 bash[17885]: audit 2026-03-09T14:33:08.237677+0000 mon.a (mon.0) 661 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:09.226 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:09 vm11 bash[17885]: audit 2026-03-09T14:33:08.238265+0000 mon.b (mon.2) 69 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.urmgxb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:33:09.226 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:09 vm11 bash[17885]: audit 2026-03-09T14:33:08.238912+0000 mon.a (mon.0) 662 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.urmgxb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:33:09.226 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:09 vm11 bash[17885]: audit 2026-03-09T14:33:08.243402+0000 mon.a (mon.0) 663 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.urmgxb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T14:33:09.226 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:09 vm11 bash[17885]: audit 2026-03-09T14:33:08.249603+0000 mon.a (mon.0) 664 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:09.226 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:09 vm11 bash[17885]: audit 2026-03-09T14:33:08.250353+0000 mon.b (mon.2) 70 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:33:09.226 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:09 vm11 bash[17885]: cephadm 2026-03-09T14:33:08.251550+0000 mgr.y (mgr.24310) 60 : cephadm 
[INF] Deploying daemon rgw.foo.vm07.urmgxb on vm07 2026-03-09T14:33:09.226 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:09 vm11 bash[17885]: audit 2026-03-09T14:33:08.938910+0000 mon.a (mon.0) 665 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:09.226 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:09 vm11 bash[17885]: audit 2026-03-09T14:33:08.941118+0000 mon.b (mon.2) 71 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm11.ncyump", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:33:09.226 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:09 vm11 bash[17885]: audit 2026-03-09T14:33:08.941762+0000 mon.a (mon.0) 666 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm11.ncyump", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:33:09.226 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:09 vm11 bash[17885]: audit 2026-03-09T14:33:08.945749+0000 mon.a (mon.0) 667 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm11.ncyump", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T14:33:09.226 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:09 vm11 bash[17885]: audit 2026-03-09T14:33:08.951664+0000 mon.a (mon.0) 668 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:09.226 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:09 vm11 bash[17885]: audit 2026-03-09T14:33:08.954738+0000 mon.b (mon.2) 72 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:33:09.226 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:09 vm11 bash[17885]: cluster 2026-03-09T14:33:09.066642+0000 mon.a (mon.0) 669 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in 2026-03-09T14:33:09.747 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:33:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:09.748 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:33:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:09.748 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:33:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:33:09.748 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:33:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:09.748 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:33:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:09.748 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:33:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:09.748 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:33:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:09.748 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:33:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:09.748 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:10.007 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:33:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:10.007 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:33:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:10.007 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:33:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:10.007 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:33:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:10.007 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:33:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:10.007 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:33:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:10.008 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:33:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:10.008 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:33:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:10.008 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:33:10.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:10 vm07 bash[17480]: cephadm 2026-03-09T14:33:08.955983+0000 mgr.y (mgr.24310) 61 : cephadm [INF] Deploying daemon rgw.foo.vm11.ncyump on vm11 2026-03-09T14:33:10.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:10 vm07 bash[17480]: cluster 2026-03-09T14:33:09.011032+0000 mgr.y (mgr.24310) 62 : cluster [DBG] pgmap v39: 161 pgs: 32 unknown, 129 active+clean; 451 KiB data, 53 MiB used, 160 GiB / 160 GiB avail; 176 B/s rd, 352 B/s wr, 1 op/s 2026-03-09T14:33:10.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:10 vm07 bash[17480]: audit 2026-03-09T14:33:09.876910+0000 mon.a (mon.0) 670 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:10.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:10 vm07 bash[17480]: audit 2026-03-09T14:33:09.886406+0000 mon.a (mon.0) 671 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:10.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:10 vm07 bash[17480]: audit 2026-03-09T14:33:09.891650+0000 mon.b (mon.2) 73 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm07.tkkeli", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:33:10.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:10 vm07 bash[17480]: audit 2026-03-09T14:33:09.892353+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm07.tkkeli", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:33:10.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:10 vm07 bash[17480]: audit 2026-03-09T14:33:09.896521+0000 mon.a (mon.0) 673 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm07.tkkeli", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T14:33:10.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:10 vm07 bash[17480]: audit 2026-03-09T14:33:09.902782+0000 mon.a (mon.0) 674 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:10.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:10 vm07 bash[17480]: audit 2026-03-09T14:33:09.904937+0000 mon.b (mon.2) 74 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:33:10.164 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:10 vm07 bash[17480]: cluster 2026-03-09T14:33:10.076709+0000 mon.a (mon.0) 675 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in 2026-03-09T14:33:10.208 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch apply iscsi foo u p' 2026-03-09T14:33:10.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:10 vm11 bash[17885]: cephadm 2026-03-09T14:33:08.955983+0000 mgr.y (mgr.24310) 61 : cephadm [INF] Deploying daemon rgw.foo.vm11.ncyump on vm11 2026-03-09T14:33:10.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:10 vm11 bash[17885]: cluster 2026-03-09T14:33:09.011032+0000 mgr.y (mgr.24310) 62 : cluster [DBG] pgmap v39: 161 pgs: 32 unknown, 129 active+clean; 451 KiB data, 53 MiB used, 
160 GiB / 160 GiB avail; 176 B/s rd, 352 B/s wr, 1 op/s 2026-03-09T14:33:10.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:10 vm11 bash[17885]: audit 2026-03-09T14:33:09.876910+0000 mon.a (mon.0) 670 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:10.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:10 vm11 bash[17885]: audit 2026-03-09T14:33:09.886406+0000 mon.a (mon.0) 671 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:10.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:10 vm11 bash[17885]: audit 2026-03-09T14:33:09.891650+0000 mon.b (mon.2) 73 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm07.tkkeli", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:33:10.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:10 vm11 bash[17885]: audit 2026-03-09T14:33:09.892353+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm07.tkkeli", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:33:10.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:10 vm11 bash[17885]: audit 2026-03-09T14:33:09.896521+0000 mon.a (mon.0) 673 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm07.tkkeli", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T14:33:10.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:10 vm11 bash[17885]: audit 2026-03-09T14:33:09.902782+0000 mon.a (mon.0) 674 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:10.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:10 vm11 bash[17885]: audit 2026-03-09T14:33:09.904937+0000 mon.b (mon.2) 74 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:33:10.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:10 vm11 bash[17885]: cluster 2026-03-09T14:33:10.076709+0000 mon.a (mon.0) 675 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in 2026-03-09T14:33:10.529 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:10 vm07 bash[22585]: cephadm 2026-03-09T14:33:08.955983+0000 mgr.y (mgr.24310) 61 : cephadm [INF] Deploying daemon rgw.foo.vm11.ncyump on vm11 2026-03-09T14:33:10.529 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:10 vm07 bash[22585]: cluster 2026-03-09T14:33:09.011032+0000 mgr.y (mgr.24310) 62 : cluster [DBG] pgmap v39: 161 pgs: 32 unknown, 129 active+clean; 451 KiB data, 53 MiB used, 160 GiB / 160 GiB avail; 176 B/s rd, 352 B/s wr, 1 op/s 2026-03-09T14:33:10.530 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:10 vm07 bash[22585]: audit 2026-03-09T14:33:09.876910+0000 mon.a (mon.0) 670 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:10.530 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:10 vm07 bash[22585]: audit 2026-03-09T14:33:09.886406+0000 mon.a (mon.0) 671 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:10.530 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:10 vm07 bash[22585]: audit 2026-03-09T14:33:09.891650+0000 mon.b (mon.2) 73 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm07.tkkeli", "caps": ["mon", "allow *", "mgr", "allow 
rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:33:10.530 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:10 vm07 bash[22585]: audit 2026-03-09T14:33:09.892353+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm07.tkkeli", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:33:10.530 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:10 vm07 bash[22585]: audit 2026-03-09T14:33:09.896521+0000 mon.a (mon.0) 673 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm07.tkkeli", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T14:33:10.530 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:10 vm07 bash[22585]: audit 2026-03-09T14:33:09.902782+0000 mon.a (mon.0) 674 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:10.530 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:10 vm07 bash[22585]: audit 2026-03-09T14:33:09.904937+0000 mon.b (mon.2) 74 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:33:10.530 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:10 vm07 bash[22585]: cluster 2026-03-09T14:33:10.076709+0000 mon.a (mon.0) 675 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in 2026-03-09T14:33:10.797 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:10 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:10.798 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:33:10 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:10.798 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:33:10 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:10.798 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:33:10 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:10.798 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:33:10 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:10.798 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:33:10 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:10.798 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:33:10 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:10.798 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:10 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:10.798 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:33:10 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:11.086 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:10 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:11.087 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:33:10 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:11.087 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:33:10 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:33:11.087 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:33:10 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:11.087 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:33:10 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:11.087 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:33:10 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:11.087 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:33:10 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:11.087 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:10 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:11.088 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:33:10 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:11.410 INFO:teuthology.orchestra.run.vm07.stdout:Scheduled iscsi.foo update... 
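The "Scheduled iscsi.foo update..." line above is the stdout of the last orchestrator call in this batch; as the DEBUG lines show, every command in the cephadm.shell task is executed through the same cephadm wrapper. A minimal sketch of that invocation pattern, with the image, fsid and sha1 values copied verbatim from the DEBUG line (nothing new is introduced here), assuming a node that has /etc/ceph/ceph.conf and the admin keyring in place:

    # run one orchestrator command inside a cephadm shell container,
    # the same way the cephadm.shell task wraps it in this log
    sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 \
        -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df \
        -- bash -c 'ceph orch apply iscsi foo u p'

Here "foo" is the RBD pool named in the audit entry ("orch apply iscsi", pool "foo") and "u"/"p" are the iSCSI gateway API user and password; the mgr.y cephadm entries that follow confirm the resulting iscsi.foo service spec is saved with placement count:1.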
2026-03-09T14:33:11.492 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'sleep 120' 2026-03-09T14:33:11.685 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:11 vm07 bash[22585]: cephadm 2026-03-09T14:33:09.880094+0000 mgr.y (mgr.24310) 63 : cephadm [INF] Saving service rgw.smpl spec with placement count:2 2026-03-09T14:33:11.685 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:11 vm07 bash[22585]: cephadm 2026-03-09T14:33:09.906365+0000 mgr.y (mgr.24310) 64 : cephadm [INF] Deploying daemon rgw.smpl.vm07.tkkeli on vm07 2026-03-09T14:33:11.685 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:11 vm07 bash[22585]: audit 2026-03-09T14:33:10.934500+0000 mon.a (mon.0) 676 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:11.685 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:11 vm07 bash[22585]: audit 2026-03-09T14:33:10.936004+0000 mon.b (mon.2) 75 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm11.ocxkef", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:33:11.685 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:11 vm07 bash[22585]: audit 2026-03-09T14:33:10.936825+0000 mon.a (mon.0) 677 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm11.ocxkef", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:33:11.685 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:11 vm07 bash[22585]: audit 2026-03-09T14:33:10.943072+0000 mon.a (mon.0) 678 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm11.ocxkef", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T14:33:11.685 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:11 vm07 bash[22585]: audit 2026-03-09T14:33:10.960941+0000 mon.a (mon.0) 679 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:11.685 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:11 vm07 bash[22585]: audit 2026-03-09T14:33:10.963052+0000 mon.b (mon.2) 76 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:33:11.686 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:11 vm07 bash[22585]: audit 2026-03-09T14:33:11.409881+0000 mon.a (mon.0) 680 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:11.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:11 vm07 bash[17480]: cephadm 2026-03-09T14:33:09.880094+0000 mgr.y (mgr.24310) 63 : cephadm [INF] Saving service rgw.smpl spec with placement count:2 2026-03-09T14:33:11.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:11 vm07 bash[17480]: cephadm 2026-03-09T14:33:09.906365+0000 mgr.y (mgr.24310) 64 : cephadm [INF] Deploying daemon rgw.smpl.vm07.tkkeli on vm07 2026-03-09T14:33:11.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:11 vm07 bash[17480]: audit 2026-03-09T14:33:10.934500+0000 mon.a (mon.0) 676 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:11.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:11 vm07 bash[17480]: audit 
2026-03-09T14:33:10.936004+0000 mon.b (mon.2) 75 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm11.ocxkef", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:33:11.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:11 vm07 bash[17480]: audit 2026-03-09T14:33:10.936825+0000 mon.a (mon.0) 677 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm11.ocxkef", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:33:11.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:11 vm07 bash[17480]: audit 2026-03-09T14:33:10.943072+0000 mon.a (mon.0) 678 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm11.ocxkef", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T14:33:11.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:11 vm07 bash[17480]: audit 2026-03-09T14:33:10.960941+0000 mon.a (mon.0) 679 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:11.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:11 vm07 bash[17480]: audit 2026-03-09T14:33:10.963052+0000 mon.b (mon.2) 76 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:33:11.686 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:11 vm07 bash[17480]: audit 2026-03-09T14:33:11.409881+0000 mon.a (mon.0) 680 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:11.774 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:33:11 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:11.774 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:33:11 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:11.774 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:33:11 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:11.774 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:33:11 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:11.774 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:33:11 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:11.774 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:33:11 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:11.775 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:33:11 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:11.775 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:33:11 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:11.775 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:11 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:33:11.775 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:11 vm11 bash[17885]: cephadm 2026-03-09T14:33:09.880094+0000 mgr.y (mgr.24310) 63 : cephadm [INF] Saving service rgw.smpl spec with placement count:2 2026-03-09T14:33:11.775 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:11 vm11 bash[17885]: cephadm 2026-03-09T14:33:09.906365+0000 mgr.y (mgr.24310) 64 : cephadm [INF] Deploying daemon rgw.smpl.vm07.tkkeli on vm07 2026-03-09T14:33:11.775 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:11 vm11 bash[17885]: audit 2026-03-09T14:33:10.934500+0000 mon.a (mon.0) 676 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:11.775 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:11 vm11 bash[17885]: audit 2026-03-09T14:33:10.936004+0000 mon.b (mon.2) 75 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm11.ocxkef", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:33:11.775 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:11 vm11 bash[17885]: audit 2026-03-09T14:33:10.936825+0000 mon.a (mon.0) 677 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm11.ocxkef", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:33:11.775 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:11 vm11 bash[17885]: audit 2026-03-09T14:33:10.943072+0000 mon.a (mon.0) 678 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm11.ocxkef", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T14:33:11.775 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:11 vm11 bash[17885]: audit 2026-03-09T14:33:10.960941+0000 mon.a (mon.0) 679 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:11.775 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:11 vm11 bash[17885]: audit 2026-03-09T14:33:10.963052+0000 mon.b (mon.2) 76 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:33:11.775 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:11 vm11 bash[17885]: audit 2026-03-09T14:33:11.409881+0000 mon.a (mon.0) 680 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:12.087 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:33:11 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:12.087 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:33:11 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:33:12.087 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:33:11 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:12.087 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:33:11 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:12.087 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:33:11 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:12.087 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:11 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:12.087 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:33:11 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:12.087 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:33:11 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:12.088 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:33:11 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:33:12.793 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:33:12 vm07 bash[17785]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:33:12] "GET /metrics HTTP/1.1" 200 197459 "" "Prometheus/2.33.4" 2026-03-09T14:33:13.163 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:12 vm07 bash[22585]: cephadm 2026-03-09T14:33:10.964551+0000 mgr.y (mgr.24310) 65 : cephadm [INF] Deploying daemon rgw.smpl.vm11.ocxkef on vm11 2026-03-09T14:33:13.163 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:12 vm07 bash[22585]: cluster 2026-03-09T14:33:11.013333+0000 mgr.y (mgr.24310) 66 : cluster [DBG] pgmap v42: 161 pgs: 161 active+clean; 453 KiB data, 55 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 4 op/s 2026-03-09T14:33:13.163 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:12 vm07 bash[22585]: audit 2026-03-09T14:33:11.400895+0000 mgr.y (mgr.24310) 67 : audit [DBG] from='client.14661 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "foo", "api_user": "u", "api_password": "p", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:33:13.163 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:12 vm07 bash[22585]: cephadm 2026-03-09T14:33:11.401733+0000 mgr.y (mgr.24310) 68 : cephadm [INF] Saving service iscsi.foo spec with placement count:1 2026-03-09T14:33:13.163 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:12 vm07 bash[22585]: audit 2026-03-09T14:33:11.921668+0000 mon.a (mon.0) 681 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:13.163 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:12 vm07 bash[22585]: audit 2026-03-09T14:33:11.923631+0000 mon.b (mon.2) 77 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:33:13.163 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:12 vm07 bash[22585]: audit 2026-03-09T14:33:11.924718+0000 mon.b (mon.2) 78 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:33:13.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:12 vm07 bash[17480]: cephadm 2026-03-09T14:33:10.964551+0000 mgr.y (mgr.24310) 65 : cephadm [INF] Deploying daemon rgw.smpl.vm11.ocxkef on vm11 2026-03-09T14:33:13.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:12 vm07 bash[17480]: cluster 2026-03-09T14:33:11.013333+0000 mgr.y (mgr.24310) 66 : cluster [DBG] pgmap v42: 161 pgs: 161 active+clean; 453 KiB data, 55 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 4 op/s 2026-03-09T14:33:13.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:12 vm07 bash[17480]: audit 2026-03-09T14:33:11.400895+0000 mgr.y (mgr.24310) 67 : audit [DBG] from='client.14661 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "foo", "api_user": "u", "api_password": "p", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:33:13.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:12 vm07 bash[17480]: cephadm 2026-03-09T14:33:11.401733+0000 mgr.y (mgr.24310) 68 : cephadm [INF] Saving service iscsi.foo spec with placement count:1 2026-03-09T14:33:13.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:12 vm07 bash[17480]: audit 2026-03-09T14:33:11.921668+0000 mon.a (mon.0) 681 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:13.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:12 vm07 bash[17480]: audit 2026-03-09T14:33:11.923631+0000 mon.b (mon.2) 77 : audit [DBG] from='mgr.24310 
192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:33:13.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:12 vm07 bash[17480]: audit 2026-03-09T14:33:11.924718+0000 mon.b (mon.2) 78 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:33:13.257 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:12 vm11 bash[17885]: cephadm 2026-03-09T14:33:10.964551+0000 mgr.y (mgr.24310) 65 : cephadm [INF] Deploying daemon rgw.smpl.vm11.ocxkef on vm11 2026-03-09T14:33:13.257 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:12 vm11 bash[17885]: cluster 2026-03-09T14:33:11.013333+0000 mgr.y (mgr.24310) 66 : cluster [DBG] pgmap v42: 161 pgs: 161 active+clean; 453 KiB data, 55 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 4 op/s 2026-03-09T14:33:13.257 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:12 vm11 bash[17885]: audit 2026-03-09T14:33:11.400895+0000 mgr.y (mgr.24310) 67 : audit [DBG] from='client.14661 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "foo", "api_user": "u", "api_password": "p", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:33:13.257 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:12 vm11 bash[17885]: cephadm 2026-03-09T14:33:11.401733+0000 mgr.y (mgr.24310) 68 : cephadm [INF] Saving service iscsi.foo spec with placement count:1 2026-03-09T14:33:13.257 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:12 vm11 bash[17885]: audit 2026-03-09T14:33:11.921668+0000 mon.a (mon.0) 681 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:13.257 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:12 vm11 bash[17885]: audit 2026-03-09T14:33:11.923631+0000 mon.b (mon.2) 77 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:33:13.257 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:12 vm11 bash[17885]: audit 2026-03-09T14:33:11.924718+0000 mon.b (mon.2) 78 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:33:13.757 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:33:13 vm11 bash[18539]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:33:13] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T14:33:13.913 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:33:13 vm07 bash[42609]: level=warn ts=2026-03-09T14:33:13.521Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:33:13.913 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:33:13 vm07 bash[42609]: level=warn ts=2026-03-09T14:33:13.521Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:33:14.413 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:14 vm07 bash[17480]: cluster 2026-03-09T14:33:13.013751+0000 mgr.y (mgr.24310) 69 : cluster [DBG] pgmap v43: 161 
pgs: 161 active+clean; 453 KiB data, 55 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T14:33:14.413 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:14 vm07 bash[17480]: audit 2026-03-09T14:33:13.131235+0000 mon.a (mon.0) 682 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:14.413 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:14 vm07 bash[22585]: cluster 2026-03-09T14:33:13.013751+0000 mgr.y (mgr.24310) 69 : cluster [DBG] pgmap v43: 161 pgs: 161 active+clean; 453 KiB data, 55 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T14:33:14.413 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:14 vm07 bash[22585]: audit 2026-03-09T14:33:13.131235+0000 mon.a (mon.0) 682 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:14.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:14 vm11 bash[17885]: cluster 2026-03-09T14:33:13.013751+0000 mgr.y (mgr.24310) 69 : cluster [DBG] pgmap v43: 161 pgs: 161 active+clean; 453 KiB data, 55 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-09T14:33:14.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:14 vm11 bash[17885]: audit 2026-03-09T14:33:13.131235+0000 mon.a (mon.0) 682 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:16.339 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:16 vm07 bash[22585]: cluster 2026-03-09T14:33:15.014368+0000 mgr.y (mgr.24310) 70 : cluster [DBG] pgmap v44: 161 pgs: 161 active+clean; 456 KiB data, 80 MiB used, 160 GiB / 160 GiB avail; 315 KiB/s rd, 6.5 KiB/s wr, 546 op/s 2026-03-09T14:33:16.339 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:16 vm07 bash[22585]: audit 2026-03-09T14:33:15.064201+0000 mon.a (mon.0) 683 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:16.339 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:16 vm07 bash[22585]: audit 2026-03-09T14:33:15.146624+0000 mon.a (mon.0) 684 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:16.339 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:16 vm07 bash[22585]: cephadm 2026-03-09T14:33:15.149811+0000 mgr.y (mgr.24310) 71 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T14:33:16.339 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:16 vm07 bash[22585]: cluster 2026-03-09T14:33:15.634481+0000 mon.a (mon.0) 685 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in 2026-03-09T14:33:16.339 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:16 vm07 bash[17480]: cluster 2026-03-09T14:33:15.014368+0000 mgr.y (mgr.24310) 70 : cluster [DBG] pgmap v44: 161 pgs: 161 active+clean; 456 KiB data, 80 MiB used, 160 GiB / 160 GiB avail; 315 KiB/s rd, 6.5 KiB/s wr, 546 op/s 2026-03-09T14:33:16.339 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:16 vm07 bash[17480]: audit 2026-03-09T14:33:15.064201+0000 mon.a (mon.0) 683 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:16.339 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:16 vm07 bash[17480]: audit 2026-03-09T14:33:15.146624+0000 mon.a (mon.0) 684 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:16.339 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:16 vm07 bash[17480]: cephadm 2026-03-09T14:33:15.149811+0000 mgr.y (mgr.24310) 71 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T14:33:16.339 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:16 vm07 bash[17480]: cluster 2026-03-09T14:33:15.634481+0000 mon.a (mon.0) 685 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in 
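The Alertmanager warnings at 14:33:13 above ("x509: cannot validate certificate ... because it doesn't contain any IP SANs") report that the dashboard's prometheus_receiver webhook is addressed by IP (192.168.123.107:8443 and 192.168.123.111:8443) while the presented certificate carries no IP Subject Alternative Names, so verification fails and the notification is retried. As an illustrative check only (not something this test runs; host and port are taken from the warning itself), the certificate's SAN extension could be inspected with:

    # dump the Subject Alternative Name extension of the mgr dashboard certificate
    openssl s_client -connect 192.168.123.107:8443 </dev/null 2>/dev/null \
        | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'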
2026-03-09T14:33:16.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:16 vm11 bash[17885]: cluster 2026-03-09T14:33:15.014368+0000 mgr.y (mgr.24310) 70 : cluster [DBG] pgmap v44: 161 pgs: 161 active+clean; 456 KiB data, 80 MiB used, 160 GiB / 160 GiB avail; 315 KiB/s rd, 6.5 KiB/s wr, 546 op/s 2026-03-09T14:33:16.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:16 vm11 bash[17885]: audit 2026-03-09T14:33:15.064201+0000 mon.a (mon.0) 683 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:16.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:16 vm11 bash[17885]: audit 2026-03-09T14:33:15.146624+0000 mon.a (mon.0) 684 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:16.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:16 vm11 bash[17885]: cephadm 2026-03-09T14:33:15.149811+0000 mgr.y (mgr.24310) 71 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T14:33:16.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:16 vm11 bash[17885]: cluster 2026-03-09T14:33:15.634481+0000 mon.a (mon.0) 685 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in 2026-03-09T14:33:16.633 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:16 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:16.634 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:33:16 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:16.634 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:16 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:16.634 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:33:16 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:16.634 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:33:16 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:33:16.634 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:33:16 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:16.634 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:33:16 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:16.634 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:33:16 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:16.634 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:33:16 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:16.893 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:16 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:16.894 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:33:16 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:16.894 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:16 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:16.894 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:33:16 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:16.894 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:33:16 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:16.894 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:33:16 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:16.894 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:33:16 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:16.894 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:33:16 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:33:16.894 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:33:16 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
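The repeated systemd "KillMode=none" warnings above all refer to line 24 of the unit template cephadm installs for this cluster, /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service; cephadm releases of the v17.2.0 era set KillMode=none, generally so that the container runtime rather than systemd tears down the daemon's processes, which is why systemd flags it on every daemon start. Purely as an illustration of the override syntax systemd is asking for (a hypothetical drop-in, not a change this test performs, and not implied as safe for a cephadm-managed unit):

    # hypothetical drop-in; illustration of the KillMode= override systemd suggests
    sudo mkdir -p /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service.d
    printf '[Service]\nKillMode=mixed\n' | sudo tee \
        /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service.d/killmode.conf
    sudo systemctl daemon-reload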
2026-03-09T14:33:17.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:17 vm07 bash[17480]: audit 2026-03-09T14:33:16.065845+0000 mon.a (mon.0) 686 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:17.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:17 vm07 bash[17480]: audit 2026-03-09T14:33:16.079638+0000 mon.a (mon.0) 687 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:17.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:17 vm07 bash[17480]: audit 2026-03-09T14:33:16.085413+0000 mon.a (mon.0) 688 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:17.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:17 vm07 bash[17480]: cephadm 2026-03-09T14:33:16.088842+0000 mgr.y (mgr.24310) 72 : cephadm [INF] Checking pool "foo" exists for service iscsi.foo 2026-03-09T14:33:17.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:17 vm07 bash[17480]: audit 2026-03-09T14:33:16.096343+0000 mon.b (mon.2) 79 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm07.ohlmos", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T14:33:17.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:17 vm07 bash[17480]: audit 2026-03-09T14:33:16.097029+0000 mon.a (mon.0) 689 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm07.ohlmos", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T14:33:17.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:17 vm07 bash[17480]: audit 2026-03-09T14:33:16.101502+0000 mon.a (mon.0) 690 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm07.ohlmos", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-09T14:33:17.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:17 vm07 bash[17480]: audit 2026-03-09T14:33:16.105187+0000 mon.b (mon.2) 80 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:33:17.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:17 vm07 bash[17480]: cephadm 2026-03-09T14:33:16.106693+0000 mgr.y (mgr.24310) 73 : cephadm [INF] Deploying daemon iscsi.foo.vm07.ohlmos on vm07 2026-03-09T14:33:17.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:17 vm07 bash[17480]: audit 2026-03-09T14:33:16.908418+0000 mon.a (mon.0) 691 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:17.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:17 vm07 bash[17480]: audit 2026-03-09T14:33:16.912098+0000 mon.b (mon.2) 81 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:33:17.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:17 vm07 bash[17480]: audit 2026-03-09T14:33:16.912976+0000 mon.b (mon.2) 82 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get", 
"entity": "client.admin"}]: dispatch 2026-03-09T14:33:17.164 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:17 vm07 bash[22585]: audit 2026-03-09T14:33:16.065845+0000 mon.a (mon.0) 686 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:17.164 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:17 vm07 bash[22585]: audit 2026-03-09T14:33:16.079638+0000 mon.a (mon.0) 687 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:17.164 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:17 vm07 bash[22585]: audit 2026-03-09T14:33:16.085413+0000 mon.a (mon.0) 688 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:17.164 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:17 vm07 bash[22585]: cephadm 2026-03-09T14:33:16.088842+0000 mgr.y (mgr.24310) 72 : cephadm [INF] Checking pool "foo" exists for service iscsi.foo 2026-03-09T14:33:17.164 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:17 vm07 bash[22585]: audit 2026-03-09T14:33:16.096343+0000 mon.b (mon.2) 79 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm07.ohlmos", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T14:33:17.164 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:17 vm07 bash[22585]: audit 2026-03-09T14:33:16.097029+0000 mon.a (mon.0) 689 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm07.ohlmos", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T14:33:17.164 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:17 vm07 bash[22585]: audit 2026-03-09T14:33:16.101502+0000 mon.a (mon.0) 690 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm07.ohlmos", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-09T14:33:17.164 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:17 vm07 bash[22585]: audit 2026-03-09T14:33:16.105187+0000 mon.b (mon.2) 80 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:33:17.164 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:17 vm07 bash[22585]: cephadm 2026-03-09T14:33:16.106693+0000 mgr.y (mgr.24310) 73 : cephadm [INF] Deploying daemon iscsi.foo.vm07.ohlmos on vm07 2026-03-09T14:33:17.164 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:17 vm07 bash[22585]: audit 2026-03-09T14:33:16.908418+0000 mon.a (mon.0) 691 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:17.164 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:17 vm07 bash[22585]: audit 2026-03-09T14:33:16.912098+0000 mon.b (mon.2) 81 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:33:17.164 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:17 vm07 bash[22585]: audit 2026-03-09T14:33:16.912976+0000 mon.b (mon.2) 82 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' 
entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:33:17.257 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:17 vm11 bash[17885]: audit 2026-03-09T14:33:16.065845+0000 mon.a (mon.0) 686 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:17.257 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:17 vm11 bash[17885]: audit 2026-03-09T14:33:16.079638+0000 mon.a (mon.0) 687 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:17.257 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:17 vm11 bash[17885]: audit 2026-03-09T14:33:16.085413+0000 mon.a (mon.0) 688 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:17.257 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:17 vm11 bash[17885]: cephadm 2026-03-09T14:33:16.088842+0000 mgr.y (mgr.24310) 72 : cephadm [INF] Checking pool "foo" exists for service iscsi.foo 2026-03-09T14:33:17.257 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:17 vm11 bash[17885]: audit 2026-03-09T14:33:16.096343+0000 mon.b (mon.2) 79 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm07.ohlmos", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T14:33:17.257 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:17 vm11 bash[17885]: audit 2026-03-09T14:33:16.097029+0000 mon.a (mon.0) 689 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm07.ohlmos", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T14:33:17.257 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:17 vm11 bash[17885]: audit 2026-03-09T14:33:16.101502+0000 mon.a (mon.0) 690 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm07.ohlmos", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-09T14:33:17.257 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:17 vm11 bash[17885]: audit 2026-03-09T14:33:16.105187+0000 mon.b (mon.2) 80 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:33:17.257 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:17 vm11 bash[17885]: cephadm 2026-03-09T14:33:16.106693+0000 mgr.y (mgr.24310) 73 : cephadm [INF] Deploying daemon iscsi.foo.vm07.ohlmos on vm07 2026-03-09T14:33:17.257 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:17 vm11 bash[17885]: audit 2026-03-09T14:33:16.908418+0000 mon.a (mon.0) 691 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:17.257 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:17 vm11 bash[17885]: audit 2026-03-09T14:33:16.912098+0000 mon.b (mon.2) 81 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:33:17.257 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:17 vm11 bash[17885]: audit 2026-03-09T14:33:16.912976+0000 mon.b (mon.2) 82 : audit [INF] 
from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:33:18.412 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:18 vm07 bash[22585]: cluster 2026-03-09T14:33:17.014828+0000 mgr.y (mgr.24310) 74 : cluster [DBG] pgmap v46: 161 pgs: 161 active+clean; 456 KiB data, 80 MiB used, 160 GiB / 160 GiB avail; 274 KiB/s rd, 4.8 KiB/s wr, 474 op/s 2026-03-09T14:33:18.413 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:18 vm07 bash[22585]: audit 2026-03-09T14:33:17.463721+0000 mon.c (mon.1) 26 : audit [DBG] from='client.? 192.168.123.107:0/570569823' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T14:33:18.413 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:18 vm07 bash[22585]: audit 2026-03-09T14:33:17.653325+0000 mon.c (mon.1) 27 : audit [INF] from='client.? 192.168.123.107:0/668475222' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/142611623"}]: dispatch 2026-03-09T14:33:18.413 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:18 vm07 bash[22585]: audit 2026-03-09T14:33:17.653764+0000 mon.a (mon.0) 692 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/142611623"}]: dispatch 2026-03-09T14:33:18.413 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:18 vm07 bash[17480]: cluster 2026-03-09T14:33:17.014828+0000 mgr.y (mgr.24310) 74 : cluster [DBG] pgmap v46: 161 pgs: 161 active+clean; 456 KiB data, 80 MiB used, 160 GiB / 160 GiB avail; 274 KiB/s rd, 4.8 KiB/s wr, 474 op/s 2026-03-09T14:33:18.413 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:18 vm07 bash[17480]: audit 2026-03-09T14:33:17.463721+0000 mon.c (mon.1) 26 : audit [DBG] from='client.? 192.168.123.107:0/570569823' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T14:33:18.413 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:18 vm07 bash[17480]: audit 2026-03-09T14:33:17.653325+0000 mon.c (mon.1) 27 : audit [INF] from='client.? 192.168.123.107:0/668475222' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/142611623"}]: dispatch 2026-03-09T14:33:18.413 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:18 vm07 bash[17480]: audit 2026-03-09T14:33:17.653764+0000 mon.a (mon.0) 692 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/142611623"}]: dispatch 2026-03-09T14:33:18.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:18 vm11 bash[17885]: cluster 2026-03-09T14:33:17.014828+0000 mgr.y (mgr.24310) 74 : cluster [DBG] pgmap v46: 161 pgs: 161 active+clean; 456 KiB data, 80 MiB used, 160 GiB / 160 GiB avail; 274 KiB/s rd, 4.8 KiB/s wr, 474 op/s 2026-03-09T14:33:18.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:18 vm11 bash[17885]: audit 2026-03-09T14:33:17.463721+0000 mon.c (mon.1) 26 : audit [DBG] from='client.? 192.168.123.107:0/570569823' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T14:33:18.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:18 vm11 bash[17885]: audit 2026-03-09T14:33:17.653325+0000 mon.c (mon.1) 27 : audit [INF] from='client.? 
192.168.123.107:0/668475222' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/142611623"}]: dispatch 2026-03-09T14:33:18.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:18 vm11 bash[17885]: audit 2026-03-09T14:33:17.653764+0000 mon.a (mon.0) 692 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/142611623"}]: dispatch 2026-03-09T14:33:19.413 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:19 vm07 bash[22585]: audit 2026-03-09T14:33:18.082837+0000 mon.a (mon.0) 693 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/142611623"}]': finished 2026-03-09T14:33:19.413 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:19 vm07 bash[22585]: cluster 2026-03-09T14:33:18.083001+0000 mon.a (mon.0) 694 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in 2026-03-09T14:33:19.413 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:19 vm07 bash[22585]: audit 2026-03-09T14:33:18.143724+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:19.413 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:19 vm07 bash[22585]: audit 2026-03-09T14:33:18.301417+0000 mon.a (mon.0) 696 : audit [INF] from='client.? 192.168.123.107:0/2843735942' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/3123223907"}]: dispatch 2026-03-09T14:33:19.413 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:19 vm07 bash[22585]: cluster 2026-03-09T14:33:18.937661+0000 mon.a (mon.0) 697 : cluster [DBG] mgrmap e21: y(active, since 56s), standbys: x 2026-03-09T14:33:19.413 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:19 vm07 bash[17480]: audit 2026-03-09T14:33:18.082837+0000 mon.a (mon.0) 693 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/142611623"}]': finished 2026-03-09T14:33:19.413 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:19 vm07 bash[17480]: cluster 2026-03-09T14:33:18.083001+0000 mon.a (mon.0) 694 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in 2026-03-09T14:33:19.413 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:19 vm07 bash[17480]: audit 2026-03-09T14:33:18.143724+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:19.413 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:19 vm07 bash[17480]: audit 2026-03-09T14:33:18.301417+0000 mon.a (mon.0) 696 : audit [INF] from='client.? 192.168.123.107:0/2843735942' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/3123223907"}]: dispatch 2026-03-09T14:33:19.413 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:19 vm07 bash[17480]: cluster 2026-03-09T14:33:18.937661+0000 mon.a (mon.0) 697 : cluster [DBG] mgrmap e21: y(active, since 56s), standbys: x 2026-03-09T14:33:19.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:19 vm11 bash[17885]: audit 2026-03-09T14:33:18.082837+0000 mon.a (mon.0) 693 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/142611623"}]': finished 2026-03-09T14:33:19.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:19 vm11 bash[17885]: cluster 2026-03-09T14:33:18.083001+0000 mon.a (mon.0) 694 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in 2026-03-09T14:33:19.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:19 vm11 bash[17885]: audit 2026-03-09T14:33:18.143724+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:19.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:19 vm11 bash[17885]: audit 2026-03-09T14:33:18.301417+0000 mon.a (mon.0) 696 : audit [INF] from='client.? 192.168.123.107:0/2843735942' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/3123223907"}]: dispatch 2026-03-09T14:33:19.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:19 vm11 bash[17885]: cluster 2026-03-09T14:33:18.937661+0000 mon.a (mon.0) 697 : cluster [DBG] mgrmap e21: y(active, since 56s), standbys: x 2026-03-09T14:33:20.413 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:20 vm07 bash[22585]: cluster 2026-03-09T14:33:19.015155+0000 mgr.y (mgr.24310) 75 : cluster [DBG] pgmap v48: 161 pgs: 161 active+clean; 456 KiB data, 80 MiB used, 160 GiB / 160 GiB avail; 272 KiB/s rd, 4.7 KiB/s wr, 471 op/s 2026-03-09T14:33:20.413 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:20 vm07 bash[22585]: audit 2026-03-09T14:33:19.154096+0000 mon.a (mon.0) 698 : audit [INF] from='client.? 192.168.123.107:0/2843735942' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/3123223907"}]': finished 2026-03-09T14:33:20.413 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:20 vm07 bash[22585]: cluster 2026-03-09T14:33:19.154316+0000 mon.a (mon.0) 699 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in 2026-03-09T14:33:20.413 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:20 vm07 bash[22585]: audit 2026-03-09T14:33:19.332697+0000 mon.c (mon.1) 28 : audit [INF] from='client.? 192.168.123.107:0/1670381899' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/3123223907"}]: dispatch 2026-03-09T14:33:20.413 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:20 vm07 bash[22585]: audit 2026-03-09T14:33:19.333236+0000 mon.a (mon.0) 700 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/3123223907"}]: dispatch 2026-03-09T14:33:20.413 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:20 vm07 bash[22585]: audit 2026-03-09T14:33:20.038907+0000 mon.a (mon.0) 701 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:20.413 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:20 vm07 bash[22585]: audit 2026-03-09T14:33:20.123122+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:20.413 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:20 vm07 bash[22585]: audit 2026-03-09T14:33:20.128911+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:20.413 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:20 vm07 bash[22585]: audit 2026-03-09T14:33:20.135423+0000 mon.b (mon.2) 83 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T14:33:20.413 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:20 vm07 bash[22585]: audit 2026-03-09T14:33:20.137037+0000 mon.b (mon.2) 84 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T14:33:20.413 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:20 vm07 bash[17480]: cluster 2026-03-09T14:33:19.015155+0000 mgr.y (mgr.24310) 75 : cluster [DBG] pgmap v48: 161 pgs: 161 active+clean; 456 KiB data, 80 MiB used, 160 GiB / 160 GiB avail; 272 KiB/s rd, 4.7 KiB/s wr, 471 op/s 2026-03-09T14:33:20.413 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:20 vm07 bash[17480]: audit 2026-03-09T14:33:19.154096+0000 mon.a (mon.0) 698 : audit [INF] from='client.? 192.168.123.107:0/2843735942' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/3123223907"}]': finished 2026-03-09T14:33:20.413 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:20 vm07 bash[17480]: cluster 2026-03-09T14:33:19.154316+0000 mon.a (mon.0) 699 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in 2026-03-09T14:33:20.413 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:20 vm07 bash[17480]: audit 2026-03-09T14:33:19.332697+0000 mon.c (mon.1) 28 : audit [INF] from='client.? 192.168.123.107:0/1670381899' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/3123223907"}]: dispatch 2026-03-09T14:33:20.413 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:20 vm07 bash[17480]: audit 2026-03-09T14:33:19.333236+0000 mon.a (mon.0) 700 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/3123223907"}]: dispatch 2026-03-09T14:33:20.413 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:20 vm07 bash[17480]: audit 2026-03-09T14:33:20.038907+0000 mon.a (mon.0) 701 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:20.413 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:20 vm07 bash[17480]: audit 2026-03-09T14:33:20.123122+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:20.413 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:20 vm07 bash[17480]: audit 2026-03-09T14:33:20.128911+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:20.413 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:20 vm07 bash[17480]: audit 2026-03-09T14:33:20.135423+0000 mon.b (mon.2) 83 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T14:33:20.413 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:20 vm07 bash[17480]: audit 2026-03-09T14:33:20.137037+0000 mon.b (mon.2) 84 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T14:33:20.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:20 vm11 bash[17885]: cluster 2026-03-09T14:33:19.015155+0000 mgr.y (mgr.24310) 75 : cluster [DBG] pgmap v48: 161 pgs: 161 active+clean; 456 KiB data, 80 MiB used, 160 GiB / 160 GiB avail; 272 KiB/s rd, 4.7 KiB/s wr, 471 op/s 2026-03-09T14:33:20.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:20 vm11 bash[17885]: audit 2026-03-09T14:33:19.154096+0000 mon.a (mon.0) 698 : audit [INF] from='client.? 192.168.123.107:0/2843735942' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/3123223907"}]': finished 2026-03-09T14:33:20.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:20 vm11 bash[17885]: cluster 2026-03-09T14:33:19.154316+0000 mon.a (mon.0) 699 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in 2026-03-09T14:33:20.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:20 vm11 bash[17885]: audit 2026-03-09T14:33:19.332697+0000 mon.c (mon.1) 28 : audit [INF] from='client.? 192.168.123.107:0/1670381899' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/3123223907"}]: dispatch 2026-03-09T14:33:20.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:20 vm11 bash[17885]: audit 2026-03-09T14:33:19.333236+0000 mon.a (mon.0) 700 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/3123223907"}]: dispatch 2026-03-09T14:33:20.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:20 vm11 bash[17885]: audit 2026-03-09T14:33:20.038907+0000 mon.a (mon.0) 701 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:20.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:20 vm11 bash[17885]: audit 2026-03-09T14:33:20.123122+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:20.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:20 vm11 bash[17885]: audit 2026-03-09T14:33:20.128911+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:20.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:20 vm11 bash[17885]: audit 2026-03-09T14:33:20.135423+0000 mon.b (mon.2) 83 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T14:33:20.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:20 vm11 bash[17885]: audit 2026-03-09T14:33:20.137037+0000 mon.b (mon.2) 84 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T14:33:21.412 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:21 vm07 bash[22585]: audit 2026-03-09T14:33:20.136234+0000 mgr.y (mgr.24310) 76 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T14:33:21.413 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:21 vm07 bash[22585]: cephadm 2026-03-09T14:33:20.137111+0000 mgr.y (mgr.24310) 77 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.107:5000 to Dashboard 2026-03-09T14:33:21.413 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:21 vm07 bash[22585]: audit 2026-03-09T14:33:20.137609+0000 mgr.y (mgr.24310) 78 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T14:33:21.413 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:21 vm07 bash[22585]: audit 2026-03-09T14:33:20.143945+0000 mon.a (mon.0) 704 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:21.413 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:21 vm07 bash[22585]: audit 2026-03-09T14:33:20.161154+0000 mon.b (mon.2) 85 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm07"}]: dispatch 2026-03-09T14:33:21.413 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:21 vm07 bash[22585]: audit 2026-03-09T14:33:20.161928+0000 mgr.y (mgr.24310) 79 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm07"}]: dispatch 2026-03-09T14:33:21.413 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:21 vm07 bash[22585]: audit 2026-03-09T14:33:20.167494+0000 mon.a (mon.0) 705 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/3123223907"}]': finished 2026-03-09T14:33:21.413 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:21 vm07 bash[22585]: cluster 2026-03-09T14:33:20.167661+0000 mon.a (mon.0) 706 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in 2026-03-09T14:33:21.413 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:21 vm07 bash[22585]: audit 2026-03-09T14:33:20.177616+0000 mon.a (mon.0) 707 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:21.413 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:21 vm07 bash[22585]: audit 2026-03-09T14:33:20.181611+0000 mon.b (mon.2) 86 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:33:21.413 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:21 vm07 bash[22585]: audit 2026-03-09T14:33:20.186174+0000 mon.b (mon.2) 87 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:33:21.413 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:21 vm07 bash[22585]: audit 2026-03-09T14:33:20.392701+0000 mon.c (mon.1) 29 : audit [INF] from='client.? 192.168.123.107:0/3663697965' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1711405865"}]: dispatch 2026-03-09T14:33:21.413 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:21 vm07 bash[22585]: audit 2026-03-09T14:33:20.393113+0000 mon.a (mon.0) 708 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1711405865"}]: dispatch 2026-03-09T14:33:21.413 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:21 vm07 bash[17480]: audit 2026-03-09T14:33:20.136234+0000 mgr.y (mgr.24310) 76 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T14:33:21.413 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:21 vm07 bash[17480]: cephadm 2026-03-09T14:33:20.137111+0000 mgr.y (mgr.24310) 77 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.107:5000 to Dashboard 2026-03-09T14:33:21.413 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:21 vm07 bash[17480]: audit 2026-03-09T14:33:20.137609+0000 mgr.y (mgr.24310) 78 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T14:33:21.413 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:21 vm07 bash[17480]: audit 2026-03-09T14:33:20.143945+0000 mon.a (mon.0) 704 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:21.413 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:21 vm07 bash[17480]: audit 2026-03-09T14:33:20.161154+0000 mon.b (mon.2) 85 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm07"}]: dispatch 2026-03-09T14:33:21.413 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:21 vm07 bash[17480]: audit 2026-03-09T14:33:20.161928+0000 mgr.y (mgr.24310) 79 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm07"}]: dispatch 2026-03-09T14:33:21.413 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:21 vm07 bash[17480]: audit 2026-03-09T14:33:20.167494+0000 mon.a (mon.0) 705 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/3123223907"}]': finished 2026-03-09T14:33:21.413 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:21 vm07 bash[17480]: cluster 2026-03-09T14:33:20.167661+0000 mon.a (mon.0) 706 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in 2026-03-09T14:33:21.413 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:21 vm07 bash[17480]: audit 2026-03-09T14:33:20.177616+0000 mon.a (mon.0) 707 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:21.413 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:21 vm07 bash[17480]: audit 2026-03-09T14:33:20.181611+0000 mon.b (mon.2) 86 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:33:21.413 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:21 vm07 bash[17480]: audit 2026-03-09T14:33:20.186174+0000 mon.b (mon.2) 87 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:33:21.413 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:21 vm07 bash[17480]: audit 2026-03-09T14:33:20.392701+0000 mon.c (mon.1) 29 : audit [INF] from='client.? 192.168.123.107:0/3663697965' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1711405865"}]: dispatch 2026-03-09T14:33:21.413 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:21 vm07 bash[17480]: audit 2026-03-09T14:33:20.393113+0000 mon.a (mon.0) 708 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1711405865"}]: dispatch 2026-03-09T14:33:21.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:21 vm11 bash[17885]: audit 2026-03-09T14:33:20.136234+0000 mgr.y (mgr.24310) 76 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T14:33:21.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:21 vm11 bash[17885]: cephadm 2026-03-09T14:33:20.137111+0000 mgr.y (mgr.24310) 77 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.107:5000 to Dashboard 2026-03-09T14:33:21.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:21 vm11 bash[17885]: audit 2026-03-09T14:33:20.137609+0000 mgr.y (mgr.24310) 78 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T14:33:21.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:21 vm11 bash[17885]: audit 2026-03-09T14:33:20.143945+0000 mon.a (mon.0) 704 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:21.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:21 vm11 bash[17885]: audit 2026-03-09T14:33:20.161154+0000 mon.b (mon.2) 85 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm07"}]: dispatch 2026-03-09T14:33:21.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:21 vm11 bash[17885]: audit 2026-03-09T14:33:20.161928+0000 mgr.y (mgr.24310) 79 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm07"}]: dispatch 2026-03-09T14:33:21.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:21 vm11 bash[17885]: audit 2026-03-09T14:33:20.167494+0000 mon.a (mon.0) 705 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/3123223907"}]': finished 2026-03-09T14:33:21.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:21 vm11 bash[17885]: cluster 2026-03-09T14:33:20.167661+0000 mon.a (mon.0) 706 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in 2026-03-09T14:33:21.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:21 vm11 bash[17885]: audit 2026-03-09T14:33:20.177616+0000 mon.a (mon.0) 707 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:21.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:21 vm11 bash[17885]: audit 2026-03-09T14:33:20.181611+0000 mon.b (mon.2) 86 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:33:21.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:21 vm11 bash[17885]: audit 2026-03-09T14:33:20.186174+0000 mon.b (mon.2) 87 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:33:21.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:21 vm11 bash[17885]: audit 2026-03-09T14:33:20.392701+0000 mon.c (mon.1) 29 : audit [INF] from='client.? 192.168.123.107:0/3663697965' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1711405865"}]: dispatch 2026-03-09T14:33:21.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:21 vm11 bash[17885]: audit 2026-03-09T14:33:20.393113+0000 mon.a (mon.0) 708 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1711405865"}]: dispatch 2026-03-09T14:33:22.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:22 vm11 bash[17885]: cluster 2026-03-09T14:33:21.015504+0000 mgr.y (mgr.24310) 80 : cluster [DBG] pgmap v51: 161 pgs: 161 active+clean; 457 KiB data, 94 MiB used, 160 GiB / 160 GiB avail; 47 KiB/s rd, 570 B/s wr, 70 op/s 2026-03-09T14:33:22.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:22 vm11 bash[17885]: audit 2026-03-09T14:33:21.185981+0000 mon.a (mon.0) 709 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1711405865"}]': finished 2026-03-09T14:33:22.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:22 vm11 bash[17885]: cluster 2026-03-09T14:33:21.186044+0000 mon.a (mon.0) 710 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-09T14:33:22.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:22 vm11 bash[17885]: audit 2026-03-09T14:33:21.391654+0000 mon.a (mon.0) 711 : audit [INF] from='client.? 192.168.123.107:0/4008394036' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2850874332"}]: dispatch 2026-03-09T14:33:22.534 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:22 vm07 bash[22585]: cluster 2026-03-09T14:33:21.015504+0000 mgr.y (mgr.24310) 80 : cluster [DBG] pgmap v51: 161 pgs: 161 active+clean; 457 KiB data, 94 MiB used, 160 GiB / 160 GiB avail; 47 KiB/s rd, 570 B/s wr, 70 op/s 2026-03-09T14:33:22.534 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:22 vm07 bash[22585]: audit 2026-03-09T14:33:21.185981+0000 mon.a (mon.0) 709 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1711405865"}]': finished 2026-03-09T14:33:22.534 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:22 vm07 bash[22585]: cluster 2026-03-09T14:33:21.186044+0000 mon.a (mon.0) 710 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-09T14:33:22.534 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:22 vm07 bash[22585]: audit 2026-03-09T14:33:21.391654+0000 mon.a (mon.0) 711 : audit [INF] from='client.? 192.168.123.107:0/4008394036' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2850874332"}]: dispatch 2026-03-09T14:33:22.534 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:22 vm07 bash[17480]: cluster 2026-03-09T14:33:21.015504+0000 mgr.y (mgr.24310) 80 : cluster [DBG] pgmap v51: 161 pgs: 161 active+clean; 457 KiB data, 94 MiB used, 160 GiB / 160 GiB avail; 47 KiB/s rd, 570 B/s wr, 70 op/s 2026-03-09T14:33:22.534 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:22 vm07 bash[17480]: audit 2026-03-09T14:33:21.185981+0000 mon.a (mon.0) 709 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1711405865"}]': finished 2026-03-09T14:33:22.534 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:22 vm07 bash[17480]: cluster 2026-03-09T14:33:21.186044+0000 mon.a (mon.0) 710 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-09T14:33:22.534 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:22 vm07 bash[17480]: audit 2026-03-09T14:33:21.391654+0000 mon.a (mon.0) 711 : audit [INF] from='client.? 192.168.123.107:0/4008394036' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2850874332"}]: dispatch 2026-03-09T14:33:22.912 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:33:22 vm07 bash[17785]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:33:22] "GET /metrics HTTP/1.1" 200 197459 "" "Prometheus/2.33.4" 2026-03-09T14:33:23.506 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:23 vm07 bash[22585]: audit 2026-03-09T14:33:22.190516+0000 mon.a (mon.0) 712 : audit [INF] from='client.? 192.168.123.107:0/4008394036' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2850874332"}]': finished 2026-03-09T14:33:23.506 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:23 vm07 bash[22585]: cluster 2026-03-09T14:33:22.191756+0000 mon.a (mon.0) 713 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-09T14:33:23.506 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:23 vm07 bash[22585]: audit 2026-03-09T14:33:22.396847+0000 mon.c (mon.1) 30 : audit [INF] from='client.? 192.168.123.107:0/1486742671' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1354642186"}]: dispatch 2026-03-09T14:33:23.506 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:23 vm07 bash[22585]: audit 2026-03-09T14:33:22.397314+0000 mon.a (mon.0) 714 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1354642186"}]: dispatch 2026-03-09T14:33:23.506 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:23 vm07 bash[22585]: audit 2026-03-09T14:33:23.027361+0000 mon.b (mon.2) 88 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1d", "id": [7, 2]}]: dispatch 2026-03-09T14:33:23.506 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:23 vm07 bash[22585]: audit 2026-03-09T14:33:23.027600+0000 mon.b (mon.2) 89 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.a", "id": [1, 2]}]: dispatch 2026-03-09T14:33:23.506 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:23 vm07 bash[22585]: audit 2026-03-09T14:33:23.027798+0000 mon.b (mon.2) 90 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.d", "id": [1, 5]}]: dispatch 2026-03-09T14:33:23.506 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:23 vm07 bash[22585]: audit 2026-03-09T14:33:23.028001+0000 mon.b (mon.2) 91 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.12", "id": [1, 5]}]: dispatch 2026-03-09T14:33:23.506 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:23 vm07 bash[22585]: audit 2026-03-09T14:33:23.029008+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1d", "id": [7, 2]}]: dispatch 2026-03-09T14:33:23.506 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:23 vm07 bash[22585]: audit 2026-03-09T14:33:23.029505+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.a", "id": [1, 2]}]: dispatch 2026-03-09T14:33:23.506 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:23 vm07 bash[22585]: audit 2026-03-09T14:33:23.030119+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.d", "id": [1, 5]}]: dispatch 2026-03-09T14:33:23.506 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:23 vm07 bash[22585]: audit 2026-03-09T14:33:23.031241+0000 mon.a (mon.0) 718 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.12", "id": [1, 5]}]: dispatch 2026-03-09T14:33:23.507 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:23 vm07 bash[22585]: audit 2026-03-09T14:33:23.088661+0000 mon.b (mon.2) 92 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:33:23.507 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:23 vm07 bash[22585]: audit 2026-03-09T14:33:23.089988+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:33:23.507 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:23 vm07 bash[22585]: audit 2026-03-09T14:33:23.107213+0000 mon.b (mon.2) 93 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:33:23.507 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:23 vm07 bash[22585]: audit 2026-03-09T14:33:23.108416+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:33:23.507 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:23 vm07 bash[17480]: audit 2026-03-09T14:33:22.190516+0000 mon.a (mon.0) 712 : audit [INF] from='client.? 192.168.123.107:0/4008394036' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2850874332"}]': finished 2026-03-09T14:33:23.507 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:23 vm07 bash[17480]: cluster 2026-03-09T14:33:22.191756+0000 mon.a (mon.0) 713 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-09T14:33:23.507 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:23 vm07 bash[17480]: audit 2026-03-09T14:33:22.396847+0000 mon.c (mon.1) 30 : audit [INF] from='client.? 192.168.123.107:0/1486742671' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1354642186"}]: dispatch 2026-03-09T14:33:23.507 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:23 vm07 bash[17480]: audit 2026-03-09T14:33:22.397314+0000 mon.a (mon.0) 714 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1354642186"}]: dispatch 2026-03-09T14:33:23.507 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:23 vm07 bash[17480]: audit 2026-03-09T14:33:23.027361+0000 mon.b (mon.2) 88 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1d", "id": [7, 2]}]: dispatch 2026-03-09T14:33:23.507 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:23 vm07 bash[17480]: audit 2026-03-09T14:33:23.027600+0000 mon.b (mon.2) 89 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.a", "id": [1, 2]}]: dispatch 2026-03-09T14:33:23.507 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:23 vm07 bash[17480]: audit 2026-03-09T14:33:23.027798+0000 mon.b (mon.2) 90 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.d", "id": [1, 5]}]: dispatch 2026-03-09T14:33:23.507 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:23 vm07 bash[17480]: audit 2026-03-09T14:33:23.028001+0000 mon.b (mon.2) 91 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.12", "id": [1, 5]}]: dispatch 2026-03-09T14:33:23.507 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:23 vm07 bash[17480]: audit 2026-03-09T14:33:23.029008+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1d", "id": [7, 2]}]: dispatch 2026-03-09T14:33:23.507 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:23 vm07 bash[17480]: audit 2026-03-09T14:33:23.029505+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.a", "id": [1, 2]}]: dispatch 2026-03-09T14:33:23.507 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:23 vm07 bash[17480]: audit 2026-03-09T14:33:23.030119+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.d", "id": [1, 5]}]: dispatch 2026-03-09T14:33:23.507 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:23 vm07 bash[17480]: audit 2026-03-09T14:33:23.031241+0000 mon.a (mon.0) 718 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.12", "id": [1, 5]}]: dispatch 2026-03-09T14:33:23.507 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:23 vm07 bash[17480]: audit 2026-03-09T14:33:23.088661+0000 mon.b (mon.2) 92 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:33:23.507 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:23 vm07 bash[17480]: audit 2026-03-09T14:33:23.089988+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:33:23.507 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:23 vm07 bash[17480]: audit 2026-03-09T14:33:23.107213+0000 mon.b (mon.2) 93 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:33:23.507 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:23 vm07 bash[17480]: audit 2026-03-09T14:33:23.108416+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:33:23.507 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:33:23 vm11 bash[18539]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:33:23] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T14:33:23.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:23 vm11 bash[17885]: audit 2026-03-09T14:33:22.190516+0000 mon.a (mon.0) 712 : audit [INF] from='client.? 192.168.123.107:0/4008394036' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2850874332"}]': finished 2026-03-09T14:33:23.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:23 vm11 bash[17885]: cluster 2026-03-09T14:33:22.191756+0000 mon.a (mon.0) 713 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-09T14:33:23.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:23 vm11 bash[17885]: audit 2026-03-09T14:33:22.396847+0000 mon.c (mon.1) 30 : audit [INF] from='client.? 192.168.123.107:0/1486742671' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1354642186"}]: dispatch 2026-03-09T14:33:23.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:23 vm11 bash[17885]: audit 2026-03-09T14:33:22.397314+0000 mon.a (mon.0) 714 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1354642186"}]: dispatch 2026-03-09T14:33:23.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:23 vm11 bash[17885]: audit 2026-03-09T14:33:23.027361+0000 mon.b (mon.2) 88 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1d", "id": [7, 2]}]: dispatch 2026-03-09T14:33:23.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:23 vm11 bash[17885]: audit 2026-03-09T14:33:23.027600+0000 mon.b (mon.2) 89 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.a", "id": [1, 2]}]: dispatch 2026-03-09T14:33:23.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:23 vm11 bash[17885]: audit 2026-03-09T14:33:23.027798+0000 mon.b (mon.2) 90 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.d", "id": [1, 5]}]: dispatch 2026-03-09T14:33:23.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:23 vm11 bash[17885]: audit 2026-03-09T14:33:23.028001+0000 mon.b (mon.2) 91 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.12", "id": [1, 5]}]: dispatch 2026-03-09T14:33:23.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:23 vm11 bash[17885]: audit 2026-03-09T14:33:23.029008+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1d", "id": [7, 2]}]: dispatch 2026-03-09T14:33:23.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:23 vm11 bash[17885]: audit 2026-03-09T14:33:23.029505+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.a", "id": [1, 2]}]: dispatch 2026-03-09T14:33:23.507 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:23 vm11 bash[17885]: audit 2026-03-09T14:33:23.030119+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.d", "id": [1, 5]}]: dispatch 2026-03-09T14:33:23.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:23 vm11 bash[17885]: audit 2026-03-09T14:33:23.031241+0000 mon.a (mon.0) 718 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.12", "id": [1, 5]}]: dispatch 2026-03-09T14:33:23.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:23 vm11 bash[17885]: audit 2026-03-09T14:33:23.088661+0000 mon.b (mon.2) 92 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:33:23.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:23 vm11 bash[17885]: audit 2026-03-09T14:33:23.089988+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:33:23.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:23 vm11 bash[17885]: audit 2026-03-09T14:33:23.107213+0000 mon.b (mon.2) 93 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:33:23.508 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:23 vm11 bash[17885]: audit 2026-03-09T14:33:23.108416+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:33:23.913 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:33:23 vm07 bash[42609]: level=error ts=2026-03-09T14:33:23.506Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:33:23.913 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:33:23 vm07 bash[42609]: level=warn ts=2026-03-09T14:33:23.508Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:33:23.913 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:33:23 vm07 bash[42609]: level=warn ts=2026-03-09T14:33:23.512Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:33:24.662 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:24 vm07 bash[22585]: cluster 2026-03-09T14:33:23.015868+0000 mgr.y (mgr.24310) 81 : cluster [DBG] pgmap v54: 161 pgs: 161 active+clean; 457 KiB data, 94 MiB used, 160 GiB / 160 GiB avail; 63 KiB/s rd, 767 B/s wr, 94 op/s 2026-03-09T14:33:24.663 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:24 vm07 bash[22585]: audit 2026-03-09T14:33:23.220161+0000 mon.a (mon.0) 721 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1354642186"}]': finished 2026-03-09T14:33:24.663 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:24 vm07 bash[22585]: audit 2026-03-09T14:33:23.220354+0000 mon.a (mon.0) 722 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1d", "id": [7, 2]}]': finished 2026-03-09T14:33:24.663 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:24 vm07 bash[22585]: audit 2026-03-09T14:33:23.220470+0000 mon.a (mon.0) 723 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.a", "id": [1, 2]}]': finished 2026-03-09T14:33:24.663 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:24 vm07 bash[22585]: audit 2026-03-09T14:33:23.220626+0000 mon.a (mon.0) 724 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.d", "id": [1, 5]}]': finished 2026-03-09T14:33:24.663 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:24 vm07 bash[22585]: audit 2026-03-09T14:33:23.220830+0000 mon.a (mon.0) 725 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.12", "id": [1, 5]}]': finished 2026-03-09T14:33:24.663 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:24 vm07 bash[22585]: cluster 2026-03-09T14:33:23.221299+0000 mon.a (mon.0) 726 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in 2026-03-09T14:33:24.663 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:24 vm07 bash[22585]: audit 2026-03-09T14:33:23.323956+0000 mon.a (mon.0) 727 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:24.663 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:24 vm07 bash[22585]: audit 2026-03-09T14:33:23.457853+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:24.663 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:24 vm07 bash[22585]: cephadm 2026-03-09T14:33:23.462745+0000 mgr.y (mgr.24310) 82 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T14:33:24.663 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:24 vm07 bash[22585]: audit 2026-03-09T14:33:23.483307+0000 mon.a (mon.0) 729 : audit [INF] from='client.? 192.168.123.107:0/1537704454' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2030379457"}]: dispatch 2026-03-09T14:33:24.663 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:24 vm07 bash[17480]: cluster 2026-03-09T14:33:23.015868+0000 mgr.y (mgr.24310) 81 : cluster [DBG] pgmap v54: 161 pgs: 161 active+clean; 457 KiB data, 94 MiB used, 160 GiB / 160 GiB avail; 63 KiB/s rd, 767 B/s wr, 94 op/s 2026-03-09T14:33:24.663 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:24 vm07 bash[17480]: audit 2026-03-09T14:33:23.220161+0000 mon.a (mon.0) 721 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1354642186"}]': finished 2026-03-09T14:33:24.663 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:24 vm07 bash[17480]: audit 2026-03-09T14:33:23.220354+0000 mon.a (mon.0) 722 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1d", "id": [7, 2]}]': finished 2026-03-09T14:33:24.663 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:24 vm07 bash[17480]: audit 2026-03-09T14:33:23.220470+0000 mon.a (mon.0) 723 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.a", "id": [1, 2]}]': finished 2026-03-09T14:33:24.663 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:24 vm07 bash[17480]: audit 2026-03-09T14:33:23.220626+0000 mon.a (mon.0) 724 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.d", "id": [1, 5]}]': finished 2026-03-09T14:33:24.663 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:24 vm07 bash[17480]: audit 2026-03-09T14:33:23.220830+0000 mon.a (mon.0) 725 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.12", "id": [1, 5]}]': finished 2026-03-09T14:33:24.663 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:24 vm07 bash[17480]: cluster 2026-03-09T14:33:23.221299+0000 mon.a (mon.0) 726 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in 2026-03-09T14:33:24.663 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:24 vm07 bash[17480]: audit 2026-03-09T14:33:23.323956+0000 mon.a (mon.0) 727 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:24.663 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:24 vm07 bash[17480]: audit 2026-03-09T14:33:23.457853+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:24.663 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:24 vm07 bash[17480]: cephadm 2026-03-09T14:33:23.462745+0000 mgr.y (mgr.24310) 82 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T14:33:24.663 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:24 vm07 bash[17480]: audit 2026-03-09T14:33:23.483307+0000 mon.a (mon.0) 729 : audit [INF] from='client.? 192.168.123.107:0/1537704454' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2030379457"}]: dispatch 2026-03-09T14:33:24.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:24 vm11 bash[17885]: cluster 2026-03-09T14:33:23.015868+0000 mgr.y (mgr.24310) 81 : cluster [DBG] pgmap v54: 161 pgs: 161 active+clean; 457 KiB data, 94 MiB used, 160 GiB / 160 GiB avail; 63 KiB/s rd, 767 B/s wr, 94 op/s 2026-03-09T14:33:24.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:24 vm11 bash[17885]: audit 2026-03-09T14:33:23.220161+0000 mon.a (mon.0) 721 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1354642186"}]': finished 2026-03-09T14:33:24.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:24 vm11 bash[17885]: audit 2026-03-09T14:33:23.220354+0000 mon.a (mon.0) 722 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1d", "id": [7, 2]}]': finished 2026-03-09T14:33:24.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:24 vm11 bash[17885]: audit 2026-03-09T14:33:23.220470+0000 mon.a (mon.0) 723 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.a", "id": [1, 2]}]': finished 2026-03-09T14:33:24.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:24 vm11 bash[17885]: audit 2026-03-09T14:33:23.220626+0000 mon.a (mon.0) 724 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.d", "id": [1, 5]}]': finished 2026-03-09T14:33:24.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:24 vm11 bash[17885]: audit 2026-03-09T14:33:23.220830+0000 mon.a (mon.0) 725 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.12", "id": [1, 5]}]': finished 2026-03-09T14:33:24.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:24 vm11 bash[17885]: cluster 2026-03-09T14:33:23.221299+0000 mon.a (mon.0) 726 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in 2026-03-09T14:33:24.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:24 vm11 bash[17885]: audit 2026-03-09T14:33:23.323956+0000 mon.a (mon.0) 727 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:24.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:24 vm11 bash[17885]: audit 2026-03-09T14:33:23.457853+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:24.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:24 vm11 bash[17885]: cephadm 2026-03-09T14:33:23.462745+0000 mgr.y (mgr.24310) 82 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T14:33:24.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:24 vm11 bash[17885]: audit 2026-03-09T14:33:23.483307+0000 mon.a (mon.0) 729 : audit [INF] from='client.? 192.168.123.107:0/1537704454' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2030379457"}]: dispatch 2026-03-09T14:33:25.662 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:25 vm07 bash[22585]: audit 2026-03-09T14:33:24.254718+0000 mon.a (mon.0) 730 : audit [INF] from='client.? 192.168.123.107:0/1537704454' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2030379457"}]': finished 2026-03-09T14:33:25.662 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:25 vm07 bash[22585]: cluster 2026-03-09T14:33:24.254941+0000 mon.a (mon.0) 731 : cluster [DBG] osdmap e73: 8 total, 8 up, 8 in 2026-03-09T14:33:25.662 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:25 vm07 bash[22585]: audit 2026-03-09T14:33:24.506116+0000 mon.a (mon.0) 732 : audit [INF] from='client.? 
192.168.123.107:0/1221409611' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/735153467"}]: dispatch 2026-03-09T14:33:25.662 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:25 vm07 bash[22585]: audit 2026-03-09T14:33:24.889911+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:25.663 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:25 vm07 bash[17480]: audit 2026-03-09T14:33:24.254718+0000 mon.a (mon.0) 730 : audit [INF] from='client.? 192.168.123.107:0/1537704454' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2030379457"}]': finished 2026-03-09T14:33:25.663 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:25 vm07 bash[17480]: cluster 2026-03-09T14:33:24.254941+0000 mon.a (mon.0) 731 : cluster [DBG] osdmap e73: 8 total, 8 up, 8 in 2026-03-09T14:33:25.663 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:25 vm07 bash[17480]: audit 2026-03-09T14:33:24.506116+0000 mon.a (mon.0) 732 : audit [INF] from='client.? 192.168.123.107:0/1221409611' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/735153467"}]: dispatch 2026-03-09T14:33:25.663 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:25 vm07 bash[17480]: audit 2026-03-09T14:33:24.889911+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:25.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:25 vm11 bash[17885]: audit 2026-03-09T14:33:24.254718+0000 mon.a (mon.0) 730 : audit [INF] from='client.? 192.168.123.107:0/1537704454' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2030379457"}]': finished 2026-03-09T14:33:25.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:25 vm11 bash[17885]: cluster 2026-03-09T14:33:24.254941+0000 mon.a (mon.0) 731 : cluster [DBG] osdmap e73: 8 total, 8 up, 8 in 2026-03-09T14:33:25.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:25 vm11 bash[17885]: audit 2026-03-09T14:33:24.506116+0000 mon.a (mon.0) 732 : audit [INF] from='client.? 192.168.123.107:0/1221409611' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/735153467"}]: dispatch 2026-03-09T14:33:25.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:25 vm11 bash[17885]: audit 2026-03-09T14:33:24.889911+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:33:26.662 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:26 vm07 bash[22585]: cluster 2026-03-09T14:33:25.016291+0000 mgr.y (mgr.24310) 83 : cluster [DBG] pgmap v57: 161 pgs: 4 peering, 157 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 2.2 KiB/s rd, 2 op/s 2026-03-09T14:33:26.662 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:26 vm07 bash[22585]: audit 2026-03-09T14:33:25.289524+0000 mon.a (mon.0) 734 : audit [INF] from='client.? 
192.168.123.107:0/1221409611' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/735153467"}]': finished 2026-03-09T14:33:26.662 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:26 vm07 bash[22585]: cluster 2026-03-09T14:33:25.289638+0000 mon.a (mon.0) 735 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in 2026-03-09T14:33:26.662 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:26 vm07 bash[22585]: audit 2026-03-09T14:33:25.481086+0000 mon.a (mon.0) 736 : audit [INF] from='client.? 192.168.123.107:0/1477709499' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2859869932"}]: dispatch 2026-03-09T14:33:26.663 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:26 vm07 bash[17480]: cluster 2026-03-09T14:33:25.016291+0000 mgr.y (mgr.24310) 83 : cluster [DBG] pgmap v57: 161 pgs: 4 peering, 157 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 2.2 KiB/s rd, 2 op/s 2026-03-09T14:33:26.663 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:26 vm07 bash[17480]: audit 2026-03-09T14:33:25.289524+0000 mon.a (mon.0) 734 : audit [INF] from='client.? 192.168.123.107:0/1221409611' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/735153467"}]': finished 2026-03-09T14:33:26.663 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:26 vm07 bash[17480]: cluster 2026-03-09T14:33:25.289638+0000 mon.a (mon.0) 735 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in 2026-03-09T14:33:26.663 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:26 vm07 bash[17480]: audit 2026-03-09T14:33:25.481086+0000 mon.a (mon.0) 736 : audit [INF] from='client.? 192.168.123.107:0/1477709499' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2859869932"}]: dispatch 2026-03-09T14:33:26.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:26 vm11 bash[17885]: cluster 2026-03-09T14:33:25.016291+0000 mgr.y (mgr.24310) 83 : cluster [DBG] pgmap v57: 161 pgs: 4 peering, 157 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 2.2 KiB/s rd, 2 op/s 2026-03-09T14:33:26.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:26 vm11 bash[17885]: audit 2026-03-09T14:33:25.289524+0000 mon.a (mon.0) 734 : audit [INF] from='client.? 192.168.123.107:0/1221409611' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/735153467"}]': finished 2026-03-09T14:33:26.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:26 vm11 bash[17885]: cluster 2026-03-09T14:33:25.289638+0000 mon.a (mon.0) 735 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in 2026-03-09T14:33:26.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:26 vm11 bash[17885]: audit 2026-03-09T14:33:25.481086+0000 mon.a (mon.0) 736 : audit [INF] from='client.? 192.168.123.107:0/1477709499' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2859869932"}]: dispatch 2026-03-09T14:33:27.662 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:27 vm07 bash[22585]: audit 2026-03-09T14:33:26.339195+0000 mon.a (mon.0) 737 : audit [INF] from='client.? 
192.168.123.107:0/1477709499' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2859869932"}]': finished 2026-03-09T14:33:27.662 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:27 vm07 bash[22585]: cluster 2026-03-09T14:33:26.339328+0000 mon.a (mon.0) 738 : cluster [DBG] osdmap e75: 8 total, 8 up, 8 in 2026-03-09T14:33:27.663 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:27 vm07 bash[22585]: audit 2026-03-09T14:33:26.531567+0000 mon.a (mon.0) 739 : audit [INF] from='client.? 192.168.123.107:0/3170135201' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1327540493"}]: dispatch 2026-03-09T14:33:27.663 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:27 vm07 bash[17480]: audit 2026-03-09T14:33:26.339195+0000 mon.a (mon.0) 737 : audit [INF] from='client.? 192.168.123.107:0/1477709499' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2859869932"}]': finished 2026-03-09T14:33:27.663 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:27 vm07 bash[17480]: cluster 2026-03-09T14:33:26.339328+0000 mon.a (mon.0) 738 : cluster [DBG] osdmap e75: 8 total, 8 up, 8 in 2026-03-09T14:33:27.663 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:27 vm07 bash[17480]: audit 2026-03-09T14:33:26.531567+0000 mon.a (mon.0) 739 : audit [INF] from='client.? 192.168.123.107:0/3170135201' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1327540493"}]: dispatch 2026-03-09T14:33:27.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:27 vm11 bash[17885]: audit 2026-03-09T14:33:26.339195+0000 mon.a (mon.0) 737 : audit [INF] from='client.? 192.168.123.107:0/1477709499' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2859869932"}]': finished 2026-03-09T14:33:27.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:27 vm11 bash[17885]: cluster 2026-03-09T14:33:26.339328+0000 mon.a (mon.0) 738 : cluster [DBG] osdmap e75: 8 total, 8 up, 8 in 2026-03-09T14:33:27.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:27 vm11 bash[17885]: audit 2026-03-09T14:33:26.531567+0000 mon.a (mon.0) 739 : audit [INF] from='client.? 192.168.123.107:0/3170135201' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1327540493"}]: dispatch 2026-03-09T14:33:28.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:28 vm11 bash[17885]: cluster 2026-03-09T14:33:27.016742+0000 mgr.y (mgr.24310) 84 : cluster [DBG] pgmap v60: 161 pgs: 4 peering, 157 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 2.2 KiB/s rd, 2 op/s 2026-03-09T14:33:28.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:28 vm11 bash[17885]: audit 2026-03-09T14:33:27.302133+0000 mgr.y (mgr.24310) 85 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:33:28.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:28 vm11 bash[17885]: audit 2026-03-09T14:33:27.486722+0000 mon.a (mon.0) 740 : audit [INF] from='client.? 
192.168.123.107:0/3170135201' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1327540493"}]': finished 2026-03-09T14:33:28.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:28 vm11 bash[17885]: cluster 2026-03-09T14:33:27.487798+0000 mon.a (mon.0) 741 : cluster [DBG] osdmap e76: 8 total, 8 up, 8 in 2026-03-09T14:33:28.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:28 vm11 bash[17885]: audit 2026-03-09T14:33:27.718820+0000 mon.a (mon.0) 742 : audit [INF] from='client.? 192.168.123.107:0/2334918776' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1561120863"}]: dispatch 2026-03-09T14:33:28.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:28 vm07 bash[22585]: cluster 2026-03-09T14:33:27.016742+0000 mgr.y (mgr.24310) 84 : cluster [DBG] pgmap v60: 161 pgs: 4 peering, 157 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 2.2 KiB/s rd, 2 op/s 2026-03-09T14:33:28.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:28 vm07 bash[22585]: audit 2026-03-09T14:33:27.302133+0000 mgr.y (mgr.24310) 85 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:33:28.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:28 vm07 bash[22585]: audit 2026-03-09T14:33:27.486722+0000 mon.a (mon.0) 740 : audit [INF] from='client.? 192.168.123.107:0/3170135201' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1327540493"}]': finished 2026-03-09T14:33:28.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:28 vm07 bash[22585]: cluster 2026-03-09T14:33:27.487798+0000 mon.a (mon.0) 741 : cluster [DBG] osdmap e76: 8 total, 8 up, 8 in 2026-03-09T14:33:28.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:28 vm07 bash[22585]: audit 2026-03-09T14:33:27.718820+0000 mon.a (mon.0) 742 : audit [INF] from='client.? 192.168.123.107:0/2334918776' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1561120863"}]: dispatch 2026-03-09T14:33:28.913 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:28 vm07 bash[17480]: cluster 2026-03-09T14:33:27.016742+0000 mgr.y (mgr.24310) 84 : cluster [DBG] pgmap v60: 161 pgs: 4 peering, 157 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 2.2 KiB/s rd, 2 op/s 2026-03-09T14:33:28.913 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:28 vm07 bash[17480]: audit 2026-03-09T14:33:27.302133+0000 mgr.y (mgr.24310) 85 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:33:28.913 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:28 vm07 bash[17480]: audit 2026-03-09T14:33:27.486722+0000 mon.a (mon.0) 740 : audit [INF] from='client.? 
192.168.123.107:0/3170135201' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1327540493"}]': finished 2026-03-09T14:33:28.913 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:28 vm07 bash[17480]: cluster 2026-03-09T14:33:27.487798+0000 mon.a (mon.0) 741 : cluster [DBG] osdmap e76: 8 total, 8 up, 8 in 2026-03-09T14:33:28.913 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:28 vm07 bash[17480]: audit 2026-03-09T14:33:27.718820+0000 mon.a (mon.0) 742 : audit [INF] from='client.? 192.168.123.107:0/2334918776' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1561120863"}]: dispatch 2026-03-09T14:33:29.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:29 vm11 bash[17885]: audit 2026-03-09T14:33:28.493774+0000 mon.a (mon.0) 743 : audit [INF] from='client.? 192.168.123.107:0/2334918776' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1561120863"}]': finished 2026-03-09T14:33:29.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:29 vm11 bash[17885]: cluster 2026-03-09T14:33:28.493980+0000 mon.a (mon.0) 744 : cluster [DBG] osdmap e77: 8 total, 8 up, 8 in 2026-03-09T14:33:29.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:29 vm11 bash[17885]: audit 2026-03-09T14:33:28.675985+0000 mon.c (mon.1) 31 : audit [INF] from='client.? 192.168.123.107:0/195252223' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/735153467"}]: dispatch 2026-03-09T14:33:29.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:29 vm11 bash[17885]: audit 2026-03-09T14:33:28.676459+0000 mon.a (mon.0) 745 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/735153467"}]: dispatch 2026-03-09T14:33:29.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:29 vm07 bash[22585]: audit 2026-03-09T14:33:28.493774+0000 mon.a (mon.0) 743 : audit [INF] from='client.? 192.168.123.107:0/2334918776' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1561120863"}]': finished 2026-03-09T14:33:29.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:29 vm07 bash[22585]: cluster 2026-03-09T14:33:28.493980+0000 mon.a (mon.0) 744 : cluster [DBG] osdmap e77: 8 total, 8 up, 8 in 2026-03-09T14:33:29.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:29 vm07 bash[22585]: audit 2026-03-09T14:33:28.675985+0000 mon.c (mon.1) 31 : audit [INF] from='client.? 192.168.123.107:0/195252223' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/735153467"}]: dispatch 2026-03-09T14:33:29.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:29 vm07 bash[22585]: audit 2026-03-09T14:33:28.676459+0000 mon.a (mon.0) 745 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/735153467"}]: dispatch 2026-03-09T14:33:29.913 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:29 vm07 bash[17480]: audit 2026-03-09T14:33:28.493774+0000 mon.a (mon.0) 743 : audit [INF] from='client.? 
192.168.123.107:0/2334918776' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1561120863"}]': finished 2026-03-09T14:33:29.913 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:29 vm07 bash[17480]: cluster 2026-03-09T14:33:28.493980+0000 mon.a (mon.0) 744 : cluster [DBG] osdmap e77: 8 total, 8 up, 8 in 2026-03-09T14:33:29.913 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:29 vm07 bash[17480]: audit 2026-03-09T14:33:28.675985+0000 mon.c (mon.1) 31 : audit [INF] from='client.? 192.168.123.107:0/195252223' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/735153467"}]: dispatch 2026-03-09T14:33:29.913 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:29 vm07 bash[17480]: audit 2026-03-09T14:33:28.676459+0000 mon.a (mon.0) 745 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/735153467"}]: dispatch 2026-03-09T14:33:30.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:30 vm11 bash[17885]: cluster 2026-03-09T14:33:29.017135+0000 mgr.y (mgr.24310) 86 : cluster [DBG] pgmap v63: 161 pgs: 4 peering, 157 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:33:30.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:30 vm11 bash[17885]: audit 2026-03-09T14:33:29.515000+0000 mon.a (mon.0) 746 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/735153467"}]': finished 2026-03-09T14:33:30.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:30 vm11 bash[17885]: cluster 2026-03-09T14:33:29.515083+0000 mon.a (mon.0) 747 : cluster [DBG] osdmap e78: 8 total, 8 up, 8 in 2026-03-09T14:33:30.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:30 vm11 bash[17885]: audit 2026-03-09T14:33:29.715214+0000 mon.a (mon.0) 748 : audit [INF] from='client.? 192.168.123.107:0/4087429618' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1541928502"}]: dispatch 2026-03-09T14:33:30.912 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:30 vm07 bash[17480]: cluster 2026-03-09T14:33:29.017135+0000 mgr.y (mgr.24310) 86 : cluster [DBG] pgmap v63: 161 pgs: 4 peering, 157 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:33:30.913 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:30 vm07 bash[17480]: audit 2026-03-09T14:33:29.515000+0000 mon.a (mon.0) 746 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/735153467"}]': finished 2026-03-09T14:33:30.913 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:30 vm07 bash[17480]: cluster 2026-03-09T14:33:29.515083+0000 mon.a (mon.0) 747 : cluster [DBG] osdmap e78: 8 total, 8 up, 8 in 2026-03-09T14:33:30.913 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:30 vm07 bash[17480]: audit 2026-03-09T14:33:29.715214+0000 mon.a (mon.0) 748 : audit [INF] from='client.? 
192.168.123.107:0/4087429618' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1541928502"}]: dispatch 2026-03-09T14:33:30.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:30 vm07 bash[22585]: cluster 2026-03-09T14:33:29.017135+0000 mgr.y (mgr.24310) 86 : cluster [DBG] pgmap v63: 161 pgs: 4 peering, 157 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:33:30.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:30 vm07 bash[22585]: audit 2026-03-09T14:33:29.515000+0000 mon.a (mon.0) 746 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/735153467"}]': finished 2026-03-09T14:33:30.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:30 vm07 bash[22585]: cluster 2026-03-09T14:33:29.515083+0000 mon.a (mon.0) 747 : cluster [DBG] osdmap e78: 8 total, 8 up, 8 in 2026-03-09T14:33:30.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:30 vm07 bash[22585]: audit 2026-03-09T14:33:29.715214+0000 mon.a (mon.0) 748 : audit [INF] from='client.? 192.168.123.107:0/4087429618' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1541928502"}]: dispatch 2026-03-09T14:33:31.912 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:31 vm07 bash[17480]: audit 2026-03-09T14:33:30.521883+0000 mon.a (mon.0) 749 : audit [INF] from='client.? 192.168.123.107:0/4087429618' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1541928502"}]': finished 2026-03-09T14:33:31.913 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:31 vm07 bash[17480]: cluster 2026-03-09T14:33:30.524478+0000 mon.a (mon.0) 750 : cluster [DBG] osdmap e79: 8 total, 8 up, 8 in 2026-03-09T14:33:31.913 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:31 vm07 bash[17480]: audit 2026-03-09T14:33:30.724147+0000 mon.c (mon.1) 32 : audit [INF] from='client.? 192.168.123.107:0/516812873' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/3454294218"}]: dispatch 2026-03-09T14:33:31.913 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:31 vm07 bash[17480]: audit 2026-03-09T14:33:30.724571+0000 mon.a (mon.0) 751 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/3454294218"}]: dispatch 2026-03-09T14:33:31.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:31 vm07 bash[22585]: audit 2026-03-09T14:33:30.521883+0000 mon.a (mon.0) 749 : audit [INF] from='client.? 192.168.123.107:0/4087429618' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1541928502"}]': finished 2026-03-09T14:33:31.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:31 vm07 bash[22585]: cluster 2026-03-09T14:33:30.524478+0000 mon.a (mon.0) 750 : cluster [DBG] osdmap e79: 8 total, 8 up, 8 in 2026-03-09T14:33:31.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:31 vm07 bash[22585]: audit 2026-03-09T14:33:30.724147+0000 mon.c (mon.1) 32 : audit [INF] from='client.? 
192.168.123.107:0/516812873' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/3454294218"}]: dispatch 2026-03-09T14:33:31.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:31 vm07 bash[22585]: audit 2026-03-09T14:33:30.724571+0000 mon.a (mon.0) 751 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/3454294218"}]: dispatch 2026-03-09T14:33:32.007 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:31 vm11 bash[17885]: audit 2026-03-09T14:33:30.521883+0000 mon.a (mon.0) 749 : audit [INF] from='client.? 192.168.123.107:0/4087429618' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1541928502"}]': finished 2026-03-09T14:33:32.007 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:31 vm11 bash[17885]: cluster 2026-03-09T14:33:30.524478+0000 mon.a (mon.0) 750 : cluster [DBG] osdmap e79: 8 total, 8 up, 8 in 2026-03-09T14:33:32.007 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:31 vm11 bash[17885]: audit 2026-03-09T14:33:30.724147+0000 mon.c (mon.1) 32 : audit [INF] from='client.? 192.168.123.107:0/516812873' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/3454294218"}]: dispatch 2026-03-09T14:33:32.007 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:31 vm11 bash[17885]: audit 2026-03-09T14:33:30.724571+0000 mon.a (mon.0) 751 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/3454294218"}]: dispatch 2026-03-09T14:33:32.912 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:32 vm07 bash[17480]: cluster 2026-03-09T14:33:31.017484+0000 mgr.y (mgr.24310) 87 : cluster [DBG] pgmap v66: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 8.2 KiB/s rd, 8 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T14:33:32.913 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:32 vm07 bash[17480]: audit 2026-03-09T14:33:31.647378+0000 mon.a (mon.0) 752 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/3454294218"}]': finished 2026-03-09T14:33:32.913 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:32 vm07 bash[17480]: cluster 2026-03-09T14:33:31.647592+0000 mon.a (mon.0) 753 : cluster [DBG] osdmap e80: 8 total, 8 up, 8 in 2026-03-09T14:33:32.913 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:32 vm07 bash[17480]: audit 2026-03-09T14:33:31.844791+0000 mon.a (mon.0) 754 : audit [INF] from='client.? 192.168.123.107:0/3872218941' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/3548929186"}]: dispatch 2026-03-09T14:33:32.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:32 vm07 bash[22585]: cluster 2026-03-09T14:33:31.017484+0000 mgr.y (mgr.24310) 87 : cluster [DBG] pgmap v66: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 8.2 KiB/s rd, 8 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T14:33:32.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:32 vm07 bash[22585]: audit 2026-03-09T14:33:31.647378+0000 mon.a (mon.0) 752 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/3454294218"}]': finished 2026-03-09T14:33:32.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:32 vm07 bash[22585]: cluster 2026-03-09T14:33:31.647592+0000 mon.a (mon.0) 753 : cluster [DBG] osdmap e80: 8 total, 8 up, 8 in 2026-03-09T14:33:32.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:32 vm07 bash[22585]: audit 2026-03-09T14:33:31.844791+0000 mon.a (mon.0) 754 : audit [INF] from='client.? 192.168.123.107:0/3872218941' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/3548929186"}]: dispatch 2026-03-09T14:33:32.913 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:33:32 vm07 bash[17785]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:33:32] "GET /metrics HTTP/1.1" 200 214525 "" "Prometheus/2.33.4" 2026-03-09T14:33:33.007 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:32 vm11 bash[17885]: cluster 2026-03-09T14:33:31.017484+0000 mgr.y (mgr.24310) 87 : cluster [DBG] pgmap v66: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 8.2 KiB/s rd, 8 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T14:33:33.007 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:32 vm11 bash[17885]: audit 2026-03-09T14:33:31.647378+0000 mon.a (mon.0) 752 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/3454294218"}]': finished 2026-03-09T14:33:33.007 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:32 vm11 bash[17885]: cluster 2026-03-09T14:33:31.647592+0000 mon.a (mon.0) 753 : cluster [DBG] osdmap e80: 8 total, 8 up, 8 in 2026-03-09T14:33:33.007 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:32 vm11 bash[17885]: audit 2026-03-09T14:33:31.844791+0000 mon.a (mon.0) 754 : audit [INF] from='client.? 192.168.123.107:0/3872218941' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/3548929186"}]: dispatch 2026-03-09T14:33:33.757 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:33:33 vm11 bash[18539]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:33:33] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T14:33:33.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:33 vm11 bash[17885]: audit 2026-03-09T14:33:32.655081+0000 mon.a (mon.0) 755 : audit [INF] from='client.? 192.168.123.107:0/3872218941' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/3548929186"}]': finished 2026-03-09T14:33:33.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:33 vm11 bash[17885]: cluster 2026-03-09T14:33:32.655149+0000 mon.a (mon.0) 756 : cluster [DBG] osdmap e81: 8 total, 8 up, 8 in 2026-03-09T14:33:33.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:33 vm11 bash[17885]: audit 2026-03-09T14:33:32.842578+0000 mon.c (mon.1) 33 : audit [INF] from='client.? 192.168.123.107:0/826051257' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/3548929186"}]: dispatch 2026-03-09T14:33:33.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:33 vm11 bash[17885]: audit 2026-03-09T14:33:32.843132+0000 mon.a (mon.0) 757 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/3548929186"}]: dispatch 2026-03-09T14:33:33.912 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:33 vm07 bash[17480]: audit 2026-03-09T14:33:32.655081+0000 mon.a (mon.0) 755 : audit [INF] from='client.? 192.168.123.107:0/3872218941' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/3548929186"}]': finished 2026-03-09T14:33:33.913 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:33 vm07 bash[17480]: cluster 2026-03-09T14:33:32.655149+0000 mon.a (mon.0) 756 : cluster [DBG] osdmap e81: 8 total, 8 up, 8 in 2026-03-09T14:33:33.913 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:33 vm07 bash[17480]: audit 2026-03-09T14:33:32.842578+0000 mon.c (mon.1) 33 : audit [INF] from='client.? 192.168.123.107:0/826051257' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/3548929186"}]: dispatch 2026-03-09T14:33:33.913 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:33 vm07 bash[17480]: audit 2026-03-09T14:33:32.843132+0000 mon.a (mon.0) 757 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/3548929186"}]: dispatch 2026-03-09T14:33:33.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:33 vm07 bash[22585]: audit 2026-03-09T14:33:32.655081+0000 mon.a (mon.0) 755 : audit [INF] from='client.? 192.168.123.107:0/3872218941' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/3548929186"}]': finished 2026-03-09T14:33:33.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:33 vm07 bash[22585]: cluster 2026-03-09T14:33:32.655149+0000 mon.a (mon.0) 756 : cluster [DBG] osdmap e81: 8 total, 8 up, 8 in 2026-03-09T14:33:33.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:33 vm07 bash[22585]: audit 2026-03-09T14:33:32.842578+0000 mon.c (mon.1) 33 : audit [INF] from='client.? 192.168.123.107:0/826051257' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/3548929186"}]: dispatch 2026-03-09T14:33:33.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:33 vm07 bash[22585]: audit 2026-03-09T14:33:32.843132+0000 mon.a (mon.0) 757 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/3548929186"}]: dispatch 2026-03-09T14:33:33.913 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:33:33 vm07 bash[42609]: level=error ts=2026-03-09T14:33:33.506Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:33:33.913 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:33:33 vm07 bash[42609]: level=warn ts=2026-03-09T14:33:33.509Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:33:33.913 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:33:33 vm07 bash[42609]: level=warn ts=2026-03-09T14:33:33.509Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:33:35.007 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:34 vm11 bash[17885]: cluster 2026-03-09T14:33:33.017867+0000 mgr.y (mgr.24310) 88 : cluster [DBG] pgmap v69: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 8.2 KiB/s rd, 8 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T14:33:35.007 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:34 vm11 bash[17885]: audit 2026-03-09T14:33:33.671960+0000 mon.a (mon.0) 758 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/3548929186"}]': finished 2026-03-09T14:33:35.007 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:34 vm11 bash[17885]: cluster 2026-03-09T14:33:33.672223+0000 mon.a (mon.0) 759 : cluster [DBG] osdmap e82: 8 total, 8 up, 8 in 2026-03-09T14:33:35.162 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:34 vm07 bash[22585]: cluster 2026-03-09T14:33:33.017867+0000 mgr.y (mgr.24310) 88 : cluster [DBG] pgmap v69: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 8.2 KiB/s rd, 8 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T14:33:35.162 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:34 vm07 bash[22585]: audit 2026-03-09T14:33:33.671960+0000 mon.a (mon.0) 758 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/3548929186"}]': finished 2026-03-09T14:33:35.162 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:34 vm07 bash[22585]: cluster 2026-03-09T14:33:33.672223+0000 mon.a (mon.0) 759 : cluster [DBG] osdmap e82: 8 total, 8 up, 8 in 2026-03-09T14:33:35.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:34 vm07 bash[17480]: cluster 2026-03-09T14:33:33.017867+0000 mgr.y (mgr.24310) 88 : cluster [DBG] pgmap v69: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 8.2 KiB/s rd, 8 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T14:33:35.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:34 vm07 bash[17480]: audit 2026-03-09T14:33:33.671960+0000 mon.a (mon.0) 758 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/3548929186"}]': finished 2026-03-09T14:33:35.163 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:34 vm07 bash[17480]: cluster 2026-03-09T14:33:33.672223+0000 mon.a (mon.0) 759 : cluster [DBG] osdmap e82: 8 total, 8 up, 8 in 2026-03-09T14:33:36.912 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:36 vm07 bash[17480]: cluster 2026-03-09T14:33:35.018554+0000 mgr.y (mgr.24310) 89 : cluster [DBG] pgmap v71: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:33:36.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:36 vm07 bash[22585]: cluster 2026-03-09T14:33:35.018554+0000 mgr.y (mgr.24310) 89 : cluster [DBG] pgmap v71: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:33:37.007 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:36 vm11 bash[17885]: cluster 2026-03-09T14:33:35.018554+0000 mgr.y (mgr.24310) 89 : cluster [DBG] pgmap v71: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:33:38.912 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:38 vm07 bash[17480]: cluster 2026-03-09T14:33:37.019006+0000 mgr.y (mgr.24310) 90 : cluster [DBG] pgmap v72: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:33:38.912 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:38 vm07 bash[17480]: audit 2026-03-09T14:33:37.312239+0000 mgr.y (mgr.24310) 91 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:33:38.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:38 vm07 bash[22585]: cluster 2026-03-09T14:33:37.019006+0000 mgr.y (mgr.24310) 90 : cluster [DBG] pgmap v72: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:33:38.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:38 vm07 bash[22585]: audit 2026-03-09T14:33:37.312239+0000 mgr.y (mgr.24310) 91 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:33:39.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:38 vm11 bash[17885]: cluster 2026-03-09T14:33:37.019006+0000 mgr.y (mgr.24310) 90 : cluster [DBG] pgmap v72: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:33:39.006 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:38 vm11 bash[17885]: audit 2026-03-09T14:33:37.312239+0000 mgr.y (mgr.24310) 91 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:33:40.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:40 vm11 bash[17885]: cluster 2026-03-09T14:33:39.019326+0000 mgr.y (mgr.24310) 92 : cluster [DBG] pgmap v73: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 694 B/s rd, 0 op/s 2026-03-09T14:33:40.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:40 vm07 bash[22585]: cluster 2026-03-09T14:33:39.019326+0000 mgr.y (mgr.24310) 92 : cluster [DBG] pgmap v73: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 694 B/s rd, 0 op/s 2026-03-09T14:33:40.912 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:40 vm07 bash[17480]: cluster 2026-03-09T14:33:39.019326+0000 mgr.y (mgr.24310) 92 : cluster [DBG] pgmap v73: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 694 B/s rd, 0 op/s 2026-03-09T14:33:42.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:42 vm07 bash[22585]: cluster 2026-03-09T14:33:41.020053+0000 mgr.y (mgr.24310) 93 : cluster [DBG] pgmap v74: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:33:42.912 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:42 vm07 bash[17480]: cluster 2026-03-09T14:33:41.020053+0000 mgr.y (mgr.24310) 93 : cluster [DBG] pgmap v74: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:33:42.912 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:33:42 vm07 bash[17785]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:33:42] "GET /metrics HTTP/1.1" 200 214467 "" "Prometheus/2.33.4" 2026-03-09T14:33:43.007 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:42 vm11 bash[17885]: cluster 2026-03-09T14:33:41.020053+0000 mgr.y (mgr.24310) 93 : cluster [DBG] pgmap v74: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:33:43.756 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:33:43 vm11 bash[18539]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:33:43] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T14:33:43.912 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:33:43 vm07 bash[42609]: level=error ts=2026-03-09T14:33:43.507Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:33:43.912 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:33:43 vm07 bash[42609]: level=warn ts=2026-03-09T14:33:43.509Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:33:43.912 
INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:33:43 vm07 bash[42609]: level=warn ts=2026-03-09T14:33:43.509Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:33:44.256 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:43 vm11 bash[17885]: cluster 2026-03-09T14:33:43.020360+0000 mgr.y (mgr.24310) 94 : cluster [DBG] pgmap v75: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T14:33:44.412 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:43 vm07 bash[17480]: cluster 2026-03-09T14:33:43.020360+0000 mgr.y (mgr.24310) 94 : cluster [DBG] pgmap v75: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T14:33:44.412 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:43 vm07 bash[22585]: cluster 2026-03-09T14:33:43.020360+0000 mgr.y (mgr.24310) 94 : cluster [DBG] pgmap v75: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T14:33:46.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:46 vm07 bash[22585]: cluster 2026-03-09T14:33:45.021034+0000 mgr.y (mgr.24310) 95 : cluster [DBG] pgmap v76: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T14:33:46.912 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:46 vm07 bash[17480]: cluster 2026-03-09T14:33:45.021034+0000 mgr.y (mgr.24310) 95 : cluster [DBG] pgmap v76: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T14:33:47.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:46 vm11 bash[17885]: cluster 2026-03-09T14:33:45.021034+0000 mgr.y (mgr.24310) 95 : cluster [DBG] pgmap v76: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T14:33:49.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:48 vm11 bash[17885]: cluster 2026-03-09T14:33:47.021391+0000 mgr.y (mgr.24310) 96 : cluster [DBG] pgmap v77: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:33:49.007 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:48 vm11 bash[17885]: audit 2026-03-09T14:33:47.322061+0000 mgr.y (mgr.24310) 97 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:33:49.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:48 vm07 bash[17480]: cluster 2026-03-09T14:33:47.021391+0000 mgr.y (mgr.24310) 96 : cluster [DBG] pgmap v77: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:33:49.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:48 vm07 bash[17480]: audit 2026-03-09T14:33:47.322061+0000 mgr.y (mgr.24310) 97 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:33:49.162 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:48 vm07 bash[22585]: cluster 2026-03-09T14:33:47.021391+0000 mgr.y (mgr.24310) 96 : cluster [DBG] pgmap v77: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 
853 B/s rd, 0 op/s 2026-03-09T14:33:49.162 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:48 vm07 bash[22585]: audit 2026-03-09T14:33:47.322061+0000 mgr.y (mgr.24310) 97 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:33:51.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:50 vm11 bash[17885]: cluster 2026-03-09T14:33:49.021731+0000 mgr.y (mgr.24310) 98 : cluster [DBG] pgmap v78: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:33:51.162 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:50 vm07 bash[22585]: cluster 2026-03-09T14:33:49.021731+0000 mgr.y (mgr.24310) 98 : cluster [DBG] pgmap v78: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:33:51.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:50 vm07 bash[17480]: cluster 2026-03-09T14:33:49.021731+0000 mgr.y (mgr.24310) 98 : cluster [DBG] pgmap v78: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:33:52.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:51 vm11 bash[17885]: cluster 2026-03-09T14:33:51.022237+0000 mgr.y (mgr.24310) 99 : cluster [DBG] pgmap v79: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:33:52.162 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:51 vm07 bash[22585]: cluster 2026-03-09T14:33:51.022237+0000 mgr.y (mgr.24310) 99 : cluster [DBG] pgmap v79: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:33:52.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:51 vm07 bash[17480]: cluster 2026-03-09T14:33:51.022237+0000 mgr.y (mgr.24310) 99 : cluster [DBG] pgmap v79: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:33:52.912 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:33:52 vm07 bash[17785]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:33:52] "GET /metrics HTTP/1.1" 200 214467 "" "Prometheus/2.33.4" 2026-03-09T14:33:53.756 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:33:53 vm11 bash[18539]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:33:53] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T14:33:53.912 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:33:53 vm07 bash[42609]: level=error ts=2026-03-09T14:33:53.508Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:33:53.912 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:33:53 vm07 bash[42609]: level=warn ts=2026-03-09T14:33:53.510Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any 
IP SANs" 2026-03-09T14:33:53.912 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:33:53 vm07 bash[42609]: level=warn ts=2026-03-09T14:33:53.510Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:33:54.412 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:54 vm07 bash[22585]: cluster 2026-03-09T14:33:53.022561+0000 mgr.y (mgr.24310) 100 : cluster [DBG] pgmap v80: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:33:54.412 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:54 vm07 bash[17480]: cluster 2026-03-09T14:33:53.022561+0000 mgr.y (mgr.24310) 100 : cluster [DBG] pgmap v80: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:33:54.506 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:54 vm11 bash[17885]: cluster 2026-03-09T14:33:53.022561+0000 mgr.y (mgr.24310) 100 : cluster [DBG] pgmap v80: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:33:56.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:56 vm07 bash[22585]: cluster 2026-03-09T14:33:55.023178+0000 mgr.y (mgr.24310) 101 : cluster [DBG] pgmap v81: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:33:56.912 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:56 vm07 bash[17480]: cluster 2026-03-09T14:33:55.023178+0000 mgr.y (mgr.24310) 101 : cluster [DBG] pgmap v81: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:33:57.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:56 vm11 bash[17885]: cluster 2026-03-09T14:33:55.023178+0000 mgr.y (mgr.24310) 101 : cluster [DBG] pgmap v81: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:33:58.912 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:58 vm07 bash[17480]: cluster 2026-03-09T14:33:57.023504+0000 mgr.y (mgr.24310) 102 : cluster [DBG] pgmap v82: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:33:58.912 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:33:58 vm07 bash[17480]: audit 2026-03-09T14:33:57.330163+0000 mgr.y (mgr.24310) 103 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:33:58.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:58 vm07 bash[22585]: cluster 2026-03-09T14:33:57.023504+0000 mgr.y (mgr.24310) 102 : cluster [DBG] pgmap v82: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:33:58.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:33:58 vm07 bash[22585]: audit 2026-03-09T14:33:57.330163+0000 mgr.y (mgr.24310) 103 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:33:59.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:58 vm11 bash[17885]: cluster 2026-03-09T14:33:57.023504+0000 mgr.y (mgr.24310) 102 : cluster [DBG] pgmap v82: 161 pgs: 161 active+clean; 457 KiB 
data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:33:59.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:33:58 vm11 bash[17885]: audit 2026-03-09T14:33:57.330163+0000 mgr.y (mgr.24310) 103 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:34:00.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:00 vm07 bash[22585]: cluster 2026-03-09T14:33:59.023899+0000 mgr.y (mgr.24310) 104 : cluster [DBG] pgmap v83: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:00.912 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:00 vm07 bash[17480]: cluster 2026-03-09T14:33:59.023899+0000 mgr.y (mgr.24310) 104 : cluster [DBG] pgmap v83: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:01.007 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:00 vm11 bash[17885]: cluster 2026-03-09T14:33:59.023899+0000 mgr.y (mgr.24310) 104 : cluster [DBG] pgmap v83: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:02.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:02 vm07 bash[22585]: cluster 2026-03-09T14:34:01.024478+0000 mgr.y (mgr.24310) 105 : cluster [DBG] pgmap v84: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:34:02.912 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:34:02 vm07 bash[17785]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:34:02] "GET /metrics HTTP/1.1" 200 214452 "" "Prometheus/2.33.4" 2026-03-09T14:34:02.912 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:02 vm07 bash[17480]: cluster 2026-03-09T14:34:01.024478+0000 mgr.y (mgr.24310) 105 : cluster [DBG] pgmap v84: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:34:03.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:02 vm11 bash[17885]: cluster 2026-03-09T14:34:01.024478+0000 mgr.y (mgr.24310) 105 : cluster [DBG] pgmap v84: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:34:03.756 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:34:03 vm11 bash[18539]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:34:03] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T14:34:03.912 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:34:03 vm07 bash[42609]: level=error ts=2026-03-09T14:34:03.509Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:34:03.912 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:34:03 vm07 bash[42609]: level=warn ts=2026-03-09T14:34:03.510Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate 
for 192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:34:03.912 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:34:03 vm07 bash[42609]: level=warn ts=2026-03-09T14:34:03.511Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:34:04.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:04 vm07 bash[22585]: cluster 2026-03-09T14:34:03.024831+0000 mgr.y (mgr.24310) 106 : cluster [DBG] pgmap v85: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:04.912 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:04 vm07 bash[17480]: cluster 2026-03-09T14:34:03.024831+0000 mgr.y (mgr.24310) 106 : cluster [DBG] pgmap v85: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:05.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:04 vm11 bash[17885]: cluster 2026-03-09T14:34:03.024831+0000 mgr.y (mgr.24310) 106 : cluster [DBG] pgmap v85: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:07.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:06 vm11 bash[17885]: cluster 2026-03-09T14:34:05.025374+0000 mgr.y (mgr.24310) 107 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:34:07.162 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:06 vm07 bash[22585]: cluster 2026-03-09T14:34:05.025374+0000 mgr.y (mgr.24310) 107 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:34:07.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:06 vm07 bash[17480]: cluster 2026-03-09T14:34:05.025374+0000 mgr.y (mgr.24310) 107 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:34:09.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:08 vm11 bash[17885]: cluster 2026-03-09T14:34:07.025879+0000 mgr.y (mgr.24310) 108 : cluster [DBG] pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:09.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:08 vm11 bash[17885]: audit 2026-03-09T14:34:07.331957+0000 mgr.y (mgr.24310) 109 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:34:09.162 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:08 vm07 bash[22585]: cluster 2026-03-09T14:34:07.025879+0000 mgr.y (mgr.24310) 108 : cluster [DBG] pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:09.162 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:08 vm07 bash[22585]: audit 2026-03-09T14:34:07.331957+0000 mgr.y (mgr.24310) 109 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:34:09.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:08 vm07 bash[17480]: cluster 2026-03-09T14:34:07.025879+0000 mgr.y (mgr.24310) 108 : cluster [DBG] 
pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:09.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:08 vm07 bash[17480]: audit 2026-03-09T14:34:07.331957+0000 mgr.y (mgr.24310) 109 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:34:11.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:10 vm11 bash[17885]: cluster 2026-03-09T14:34:09.026289+0000 mgr.y (mgr.24310) 110 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:11.162 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:10 vm07 bash[22585]: cluster 2026-03-09T14:34:09.026289+0000 mgr.y (mgr.24310) 110 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:11.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:10 vm07 bash[17480]: cluster 2026-03-09T14:34:09.026289+0000 mgr.y (mgr.24310) 110 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:12.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:12 vm07 bash[22585]: cluster 2026-03-09T14:34:11.026868+0000 mgr.y (mgr.24310) 111 : cluster [DBG] pgmap v89: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:34:12.912 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:12 vm07 bash[17480]: cluster 2026-03-09T14:34:11.026868+0000 mgr.y (mgr.24310) 111 : cluster [DBG] pgmap v89: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:34:12.912 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:34:12 vm07 bash[17785]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:34:12] "GET /metrics HTTP/1.1" 200 214397 "" "Prometheus/2.33.4" 2026-03-09T14:34:13.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:12 vm11 bash[17885]: cluster 2026-03-09T14:34:11.026868+0000 mgr.y (mgr.24310) 111 : cluster [DBG] pgmap v89: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:34:13.756 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:34:13 vm11 bash[18539]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:34:13] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T14:34:13.912 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:34:13 vm07 bash[42609]: level=error ts=2026-03-09T14:34:13.509Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:34:13.912 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:34:13 vm07 bash[42609]: level=warn ts=2026-03-09T14:34:13.512Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post 
\"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:34:13.912 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:34:13 vm07 bash[42609]: level=warn ts=2026-03-09T14:34:13.512Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:34:15.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:14 vm11 bash[17885]: cluster 2026-03-09T14:34:13.027220+0000 mgr.y (mgr.24310) 112 : cluster [DBG] pgmap v90: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:15.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:14 vm07 bash[17480]: cluster 2026-03-09T14:34:13.027220+0000 mgr.y (mgr.24310) 112 : cluster [DBG] pgmap v90: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:15.162 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:14 vm07 bash[22585]: cluster 2026-03-09T14:34:13.027220+0000 mgr.y (mgr.24310) 112 : cluster [DBG] pgmap v90: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:16.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:16 vm07 bash[22585]: cluster 2026-03-09T14:34:15.027842+0000 mgr.y (mgr.24310) 113 : cluster [DBG] pgmap v91: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:34:16.912 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:16 vm07 bash[17480]: cluster 2026-03-09T14:34:15.027842+0000 mgr.y (mgr.24310) 113 : cluster [DBG] pgmap v91: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:34:17.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:16 vm11 bash[17885]: cluster 2026-03-09T14:34:15.027842+0000 mgr.y (mgr.24310) 113 : cluster [DBG] pgmap v91: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:34:18.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:18 vm07 bash[22585]: cluster 2026-03-09T14:34:17.028328+0000 mgr.y (mgr.24310) 114 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:18.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:18 vm07 bash[22585]: audit 2026-03-09T14:34:17.342137+0000 mgr.y (mgr.24310) 115 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:34:18.912 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:18 vm07 bash[17480]: cluster 2026-03-09T14:34:17.028328+0000 mgr.y (mgr.24310) 114 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:18.912 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:18 vm07 bash[17480]: audit 2026-03-09T14:34:17.342137+0000 mgr.y (mgr.24310) 115 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:34:19.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:18 
vm11 bash[17885]: cluster 2026-03-09T14:34:17.028328+0000 mgr.y (mgr.24310) 114 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:19.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:18 vm11 bash[17885]: audit 2026-03-09T14:34:17.342137+0000 mgr.y (mgr.24310) 115 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:34:21.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:20 vm11 bash[17885]: cluster 2026-03-09T14:34:19.028685+0000 mgr.y (mgr.24310) 116 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:21.162 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:20 vm07 bash[22585]: cluster 2026-03-09T14:34:19.028685+0000 mgr.y (mgr.24310) 116 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:21.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:20 vm07 bash[17480]: cluster 2026-03-09T14:34:19.028685+0000 mgr.y (mgr.24310) 116 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:22.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:22 vm07 bash[22585]: cluster 2026-03-09T14:34:21.029330+0000 mgr.y (mgr.24310) 117 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:34:22.912 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:22 vm07 bash[17480]: cluster 2026-03-09T14:34:21.029330+0000 mgr.y (mgr.24310) 117 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:34:22.912 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:34:22 vm07 bash[17785]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:34:22] "GET /metrics HTTP/1.1" 200 214397 "" "Prometheus/2.33.4" 2026-03-09T14:34:23.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:22 vm11 bash[17885]: cluster 2026-03-09T14:34:21.029330+0000 mgr.y (mgr.24310) 117 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:34:23.685 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:34:23 vm11 bash[18539]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:34:23] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T14:34:23.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:23 vm07 bash[22585]: audit 2026-03-09T14:34:23.098952+0000 mon.b (mon.2) 94 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:34:23.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:23 vm07 bash[22585]: audit 2026-03-09T14:34:23.099873+0000 mon.a (mon.0) 760 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:34:23.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:23 vm07 bash[22585]: audit 2026-03-09T14:34:23.116621+0000 mon.b (mon.2) 95 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:34:23.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:23 vm07 bash[22585]: audit 2026-03-09T14:34:23.117620+0000 mon.a (mon.0) 761 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:34:23.912 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:34:23 vm07 bash[42609]: level=error ts=2026-03-09T14:34:23.510Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:34:23.912 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:34:23 vm07 bash[42609]: level=warn ts=2026-03-09T14:34:23.512Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:34:23.912 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:34:23 vm07 bash[42609]: level=warn ts=2026-03-09T14:34:23.513Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:34:23.912 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:23 vm07 bash[17480]: audit 2026-03-09T14:34:23.098952+0000 mon.b (mon.2) 94 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:34:23.912 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:23 vm07 bash[17480]: audit 2026-03-09T14:34:23.099873+0000 mon.a (mon.0) 760 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:34:23.912 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:23 vm07 bash[17480]: audit 2026-03-09T14:34:23.116621+0000 mon.b (mon.2) 95 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:34:23.912 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:23 vm07 bash[17480]: audit 2026-03-09T14:34:23.117620+0000 mon.a (mon.0) 761 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:34:24.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:23 vm11 bash[17885]: audit 2026-03-09T14:34:23.098952+0000 mon.b (mon.2) 94 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: 
dispatch 2026-03-09T14:34:24.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:23 vm11 bash[17885]: audit 2026-03-09T14:34:23.099873+0000 mon.a (mon.0) 760 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:34:24.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:23 vm11 bash[17885]: audit 2026-03-09T14:34:23.116621+0000 mon.b (mon.2) 95 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:34:24.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:23 vm11 bash[17885]: audit 2026-03-09T14:34:23.117620+0000 mon.a (mon.0) 761 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:34:25.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:24 vm11 bash[17885]: cluster 2026-03-09T14:34:23.029682+0000 mgr.y (mgr.24310) 118 : cluster [DBG] pgmap v95: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:25.049 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:24 vm07 bash[22585]: cluster 2026-03-09T14:34:23.029682+0000 mgr.y (mgr.24310) 118 : cluster [DBG] pgmap v95: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:25.049 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:24 vm07 bash[17480]: cluster 2026-03-09T14:34:23.029682+0000 mgr.y (mgr.24310) 118 : cluster [DBG] pgmap v95: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:26.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:25 vm11 bash[17885]: audit 2026-03-09T14:34:24.893429+0000 mon.b (mon.2) 96 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:34:26.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:25 vm11 bash[17885]: audit 2026-03-09T14:34:24.894746+0000 mon.b (mon.2) 97 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:34:26.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:25 vm11 bash[17885]: audit 2026-03-09T14:34:25.057506+0000 mon.a (mon.0) 762 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:34:26.162 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:25 vm07 bash[22585]: audit 2026-03-09T14:34:24.893429+0000 mon.b (mon.2) 96 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:34:26.162 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:25 vm07 bash[22585]: audit 2026-03-09T14:34:24.894746+0000 mon.b (mon.2) 97 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:34:26.162 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:25 vm07 bash[22585]: audit 2026-03-09T14:34:25.057506+0000 mon.a (mon.0) 762 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:34:26.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:25 vm07 bash[17480]: audit 2026-03-09T14:34:24.893429+0000 mon.b (mon.2) 96 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' 
entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:34:26.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:25 vm07 bash[17480]: audit 2026-03-09T14:34:24.894746+0000 mon.b (mon.2) 97 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:34:26.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:25 vm07 bash[17480]: audit 2026-03-09T14:34:25.057506+0000 mon.a (mon.0) 762 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:34:27.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:26 vm11 bash[17885]: cluster 2026-03-09T14:34:25.030219+0000 mgr.y (mgr.24310) 119 : cluster [DBG] pgmap v96: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:34:27.162 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:26 vm07 bash[22585]: cluster 2026-03-09T14:34:25.030219+0000 mgr.y (mgr.24310) 119 : cluster [DBG] pgmap v96: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:34:27.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:26 vm07 bash[17480]: cluster 2026-03-09T14:34:25.030219+0000 mgr.y (mgr.24310) 119 : cluster [DBG] pgmap v96: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:34:28.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:27 vm11 bash[17885]: cluster 2026-03-09T14:34:27.030564+0000 mgr.y (mgr.24310) 120 : cluster [DBG] pgmap v97: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:28.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:27 vm11 bash[17885]: audit 2026-03-09T14:34:27.345824+0000 mgr.y (mgr.24310) 121 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:34:28.162 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:27 vm07 bash[22585]: cluster 2026-03-09T14:34:27.030564+0000 mgr.y (mgr.24310) 120 : cluster [DBG] pgmap v97: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:28.162 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:27 vm07 bash[22585]: audit 2026-03-09T14:34:27.345824+0000 mgr.y (mgr.24310) 121 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:34:28.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:27 vm07 bash[17480]: cluster 2026-03-09T14:34:27.030564+0000 mgr.y (mgr.24310) 120 : cluster [DBG] pgmap v97: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:28.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:27 vm07 bash[17480]: audit 2026-03-09T14:34:27.345824+0000 mgr.y (mgr.24310) 121 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:34:30.411 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:30 vm07 bash[22585]: cluster 2026-03-09T14:34:29.030905+0000 mgr.y (mgr.24310) 122 : cluster [DBG] pgmap v98: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:30.412 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:30 vm07 bash[17480]: cluster 
2026-03-09T14:34:29.030905+0000 mgr.y (mgr.24310) 122 : cluster [DBG] pgmap v98: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:30.506 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:30 vm11 bash[17885]: cluster 2026-03-09T14:34:29.030905+0000 mgr.y (mgr.24310) 122 : cluster [DBG] pgmap v98: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:32.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:32 vm07 bash[22585]: cluster 2026-03-09T14:34:31.031439+0000 mgr.y (mgr.24310) 123 : cluster [DBG] pgmap v99: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:34:32.912 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:32 vm07 bash[17480]: cluster 2026-03-09T14:34:31.031439+0000 mgr.y (mgr.24310) 123 : cluster [DBG] pgmap v99: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:34:32.912 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:34:32 vm07 bash[17785]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:34:32] "GET /metrics HTTP/1.1" 200 214421 "" "Prometheus/2.33.4" 2026-03-09T14:34:33.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:32 vm11 bash[17885]: cluster 2026-03-09T14:34:31.031439+0000 mgr.y (mgr.24310) 123 : cluster [DBG] pgmap v99: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:34:33.756 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:34:33 vm11 bash[18539]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:34:33] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T14:34:33.759 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:34:33 vm07 bash[42609]: level=error ts=2026-03-09T14:34:33.510Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:34:33.759 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:34:33 vm07 bash[42609]: level=warn ts=2026-03-09T14:34:33.512Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:34:33.759 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:34:33 vm07 bash[42609]: level=warn ts=2026-03-09T14:34:33.513Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:34:34.161 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:33 vm07 bash[22585]: cluster 2026-03-09T14:34:33.031753+0000 mgr.y (mgr.24310) 124 : cluster [DBG] pgmap v100: 161 pgs: 161 active+clean; 457 KiB data, 98 
MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:34.162 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:33 vm07 bash[17480]: cluster 2026-03-09T14:34:33.031753+0000 mgr.y (mgr.24310) 124 : cluster [DBG] pgmap v100: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:34.256 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:33 vm11 bash[17885]: cluster 2026-03-09T14:34:33.031753+0000 mgr.y (mgr.24310) 124 : cluster [DBG] pgmap v100: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:36.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:36 vm07 bash[22585]: cluster 2026-03-09T14:34:35.032285+0000 mgr.y (mgr.24310) 125 : cluster [DBG] pgmap v101: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:34:36.911 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:36 vm07 bash[17480]: cluster 2026-03-09T14:34:35.032285+0000 mgr.y (mgr.24310) 125 : cluster [DBG] pgmap v101: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:34:37.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:36 vm11 bash[17885]: cluster 2026-03-09T14:34:35.032285+0000 mgr.y (mgr.24310) 125 : cluster [DBG] pgmap v101: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:34:38.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:38 vm07 bash[22585]: cluster 2026-03-09T14:34:37.032611+0000 mgr.y (mgr.24310) 126 : cluster [DBG] pgmap v102: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:38.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:38 vm07 bash[22585]: audit 2026-03-09T14:34:37.356410+0000 mgr.y (mgr.24310) 127 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:34:38.912 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:38 vm07 bash[17480]: cluster 2026-03-09T14:34:37.032611+0000 mgr.y (mgr.24310) 126 : cluster [DBG] pgmap v102: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:38.912 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:38 vm07 bash[17480]: audit 2026-03-09T14:34:37.356410+0000 mgr.y (mgr.24310) 127 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:34:39.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:38 vm11 bash[17885]: cluster 2026-03-09T14:34:37.032611+0000 mgr.y (mgr.24310) 126 : cluster [DBG] pgmap v102: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:39.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:38 vm11 bash[17885]: audit 2026-03-09T14:34:37.356410+0000 mgr.y (mgr.24310) 127 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:34:41.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:40 vm11 bash[17885]: cluster 2026-03-09T14:34:39.033045+0000 mgr.y (mgr.24310) 128 : cluster [DBG] pgmap v103: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:41.161 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:40 vm07 bash[22585]: cluster 2026-03-09T14:34:39.033045+0000 mgr.y (mgr.24310) 128 : cluster [DBG] pgmap v103: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:41.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:40 vm07 bash[17480]: cluster 2026-03-09T14:34:39.033045+0000 mgr.y (mgr.24310) 128 : cluster [DBG] pgmap v103: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:42.161 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:41 vm07 bash[22585]: cluster 2026-03-09T14:34:41.033668+0000 mgr.y (mgr.24310) 129 : cluster [DBG] pgmap v104: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:34:42.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:41 vm07 bash[17480]: cluster 2026-03-09T14:34:41.033668+0000 mgr.y (mgr.24310) 129 : cluster [DBG] pgmap v104: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:34:42.256 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:41 vm11 bash[17885]: cluster 2026-03-09T14:34:41.033668+0000 mgr.y (mgr.24310) 129 : cluster [DBG] pgmap v104: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:34:42.911 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:34:42 vm07 bash[17785]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:34:42] "GET /metrics HTTP/1.1" 200 214434 "" "Prometheus/2.33.4" 2026-03-09T14:34:43.756 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:34:43 vm11 bash[18539]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:34:43] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T14:34:43.911 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:34:43 vm07 bash[42609]: level=error ts=2026-03-09T14:34:43.511Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:34:43.911 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:34:43 vm07 bash[42609]: level=warn ts=2026-03-09T14:34:43.513Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:34:43.911 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:34:43 vm07 bash[42609]: level=warn ts=2026-03-09T14:34:43.513Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:34:44.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:44 vm07 bash[22585]: cluster 2026-03-09T14:34:43.034058+0000 mgr.y 
(mgr.24310) 130 : cluster [DBG] pgmap v105: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:44.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:44 vm07 bash[17480]: cluster 2026-03-09T14:34:43.034058+0000 mgr.y (mgr.24310) 130 : cluster [DBG] pgmap v105: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:44.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:44 vm11 bash[17885]: cluster 2026-03-09T14:34:43.034058+0000 mgr.y (mgr.24310) 130 : cluster [DBG] pgmap v105: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:46.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:46 vm07 bash[22585]: cluster 2026-03-09T14:34:45.034583+0000 mgr.y (mgr.24310) 131 : cluster [DBG] pgmap v106: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:34:46.911 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:46 vm07 bash[17480]: cluster 2026-03-09T14:34:45.034583+0000 mgr.y (mgr.24310) 131 : cluster [DBG] pgmap v106: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:34:47.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:46 vm11 bash[17885]: cluster 2026-03-09T14:34:45.034583+0000 mgr.y (mgr.24310) 131 : cluster [DBG] pgmap v106: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:34:48.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:48 vm07 bash[22585]: cluster 2026-03-09T14:34:47.034904+0000 mgr.y (mgr.24310) 132 : cluster [DBG] pgmap v107: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:48.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:48 vm07 bash[22585]: audit 2026-03-09T14:34:47.367035+0000 mgr.y (mgr.24310) 133 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:34:48.911 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:48 vm07 bash[17480]: cluster 2026-03-09T14:34:47.034904+0000 mgr.y (mgr.24310) 132 : cluster [DBG] pgmap v107: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:48.911 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:48 vm07 bash[17480]: audit 2026-03-09T14:34:47.367035+0000 mgr.y (mgr.24310) 133 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:34:49.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:48 vm11 bash[17885]: cluster 2026-03-09T14:34:47.034904+0000 mgr.y (mgr.24310) 132 : cluster [DBG] pgmap v107: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:49.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:48 vm11 bash[17885]: audit 2026-03-09T14:34:47.367035+0000 mgr.y (mgr.24310) 133 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:34:50.256 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:49 vm11 bash[17885]: cluster 2026-03-09T14:34:49.035627+0000 mgr.y (mgr.24310) 134 : cluster [DBG] pgmap v108: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 
GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:50.411 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:49 vm07 bash[22585]: cluster 2026-03-09T14:34:49.035627+0000 mgr.y (mgr.24310) 134 : cluster [DBG] pgmap v108: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:50.411 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:49 vm07 bash[17480]: cluster 2026-03-09T14:34:49.035627+0000 mgr.y (mgr.24310) 134 : cluster [DBG] pgmap v108: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:52.787 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:34:52 vm07 bash[17785]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:34:52] "GET /metrics HTTP/1.1" 200 214434 "" "Prometheus/2.33.4" 2026-03-09T14:34:53.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:52 vm07 bash[17480]: cluster 2026-03-09T14:34:51.036149+0000 mgr.y (mgr.24310) 135 : cluster [DBG] pgmap v109: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:34:53.161 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:52 vm07 bash[22585]: cluster 2026-03-09T14:34:51.036149+0000 mgr.y (mgr.24310) 135 : cluster [DBG] pgmap v109: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:34:53.256 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:52 vm11 bash[17885]: cluster 2026-03-09T14:34:51.036149+0000 mgr.y (mgr.24310) 135 : cluster [DBG] pgmap v109: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:34:53.756 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:34:53 vm11 bash[18539]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:34:53] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T14:34:53.796 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:34:53 vm07 bash[42609]: level=error ts=2026-03-09T14:34:53.512Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:34:53.796 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:34:53 vm07 bash[42609]: level=warn ts=2026-03-09T14:34:53.514Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:34:53.796 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:34:53 vm07 bash[42609]: level=warn ts=2026-03-09T14:34:53.515Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:34:54.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:53 
vm07 bash[17480]: cluster 2026-03-09T14:34:53.036414+0000 mgr.y (mgr.24310) 136 : cluster [DBG] pgmap v110: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:54.161 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:53 vm07 bash[22585]: cluster 2026-03-09T14:34:53.036414+0000 mgr.y (mgr.24310) 136 : cluster [DBG] pgmap v110: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:54.256 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:53 vm11 bash[17885]: cluster 2026-03-09T14:34:53.036414+0000 mgr.y (mgr.24310) 136 : cluster [DBG] pgmap v110: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:57.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:56 vm11 bash[17885]: cluster 2026-03-09T14:34:55.036999+0000 mgr.y (mgr.24310) 137 : cluster [DBG] pgmap v111: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:34:57.161 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:56 vm07 bash[22585]: cluster 2026-03-09T14:34:55.036999+0000 mgr.y (mgr.24310) 137 : cluster [DBG] pgmap v111: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:34:57.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:56 vm07 bash[17480]: cluster 2026-03-09T14:34:55.036999+0000 mgr.y (mgr.24310) 137 : cluster [DBG] pgmap v111: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:34:58.161 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:57 vm07 bash[22585]: cluster 2026-03-09T14:34:57.037324+0000 mgr.y (mgr.24310) 138 : cluster [DBG] pgmap v112: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:58.161 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:34:57 vm07 bash[22585]: audit 2026-03-09T14:34:57.375647+0000 mgr.y (mgr.24310) 139 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:34:58.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:57 vm07 bash[17480]: cluster 2026-03-09T14:34:57.037324+0000 mgr.y (mgr.24310) 138 : cluster [DBG] pgmap v112: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:58.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:34:57 vm07 bash[17480]: audit 2026-03-09T14:34:57.375647+0000 mgr.y (mgr.24310) 139 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:34:58.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:57 vm11 bash[17885]: cluster 2026-03-09T14:34:57.037324+0000 mgr.y (mgr.24310) 138 : cluster [DBG] pgmap v112: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:34:58.256 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:34:57 vm11 bash[17885]: audit 2026-03-09T14:34:57.375647+0000 mgr.y (mgr.24310) 139 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:35:00.506 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:00 vm11 bash[17885]: cluster 2026-03-09T14:34:59.037638+0000 mgr.y (mgr.24310) 140 : cluster [DBG] pgmap 
v113: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:00.660 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:00 vm07 bash[22585]: cluster 2026-03-09T14:34:59.037638+0000 mgr.y (mgr.24310) 140 : cluster [DBG] pgmap v113: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:00.661 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:00 vm07 bash[17480]: cluster 2026-03-09T14:34:59.037638+0000 mgr.y (mgr.24310) 140 : cluster [DBG] pgmap v113: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:02.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:02 vm07 bash[22585]: cluster 2026-03-09T14:35:01.038216+0000 mgr.y (mgr.24310) 141 : cluster [DBG] pgmap v114: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:35:02.911 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:02 vm07 bash[17480]: cluster 2026-03-09T14:35:01.038216+0000 mgr.y (mgr.24310) 141 : cluster [DBG] pgmap v114: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:35:02.911 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:35:02 vm07 bash[17785]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:35:02] "GET /metrics HTTP/1.1" 200 214431 "" "Prometheus/2.33.4" 2026-03-09T14:35:03.005 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:02 vm11 bash[17885]: cluster 2026-03-09T14:35:01.038216+0000 mgr.y (mgr.24310) 141 : cluster [DBG] pgmap v114: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:35:03.756 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:35:03 vm11 bash[18539]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:35:03] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T14:35:03.911 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:35:03 vm07 bash[42609]: level=error ts=2026-03-09T14:35:03.513Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:35:03.911 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:35:03 vm07 bash[42609]: level=warn ts=2026-03-09T14:35:03.516Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:35:03.911 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:35:03 vm07 bash[42609]: level=warn ts=2026-03-09T14:35:03.516Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 
2026-03-09T14:35:05.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:04 vm11 bash[17885]: cluster 2026-03-09T14:35:03.038609+0000 mgr.y (mgr.24310) 142 : cluster [DBG] pgmap v115: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:05.160 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:04 vm07 bash[22585]: cluster 2026-03-09T14:35:03.038609+0000 mgr.y (mgr.24310) 142 : cluster [DBG] pgmap v115: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:05.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:04 vm07 bash[17480]: cluster 2026-03-09T14:35:03.038609+0000 mgr.y (mgr.24310) 142 : cluster [DBG] pgmap v115: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:07.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:06 vm11 bash[17885]: cluster 2026-03-09T14:35:05.039217+0000 mgr.y (mgr.24310) 143 : cluster [DBG] pgmap v116: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:35:07.160 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:06 vm07 bash[22585]: cluster 2026-03-09T14:35:05.039217+0000 mgr.y (mgr.24310) 143 : cluster [DBG] pgmap v116: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:35:07.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:06 vm07 bash[17480]: cluster 2026-03-09T14:35:05.039217+0000 mgr.y (mgr.24310) 143 : cluster [DBG] pgmap v116: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:35:09.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:08 vm11 bash[17885]: cluster 2026-03-09T14:35:07.039516+0000 mgr.y (mgr.24310) 144 : cluster [DBG] pgmap v117: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:09.018 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:08 vm11 bash[17885]: audit 2026-03-09T14:35:07.383887+0000 mgr.y (mgr.24310) 145 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:35:09.160 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:08 vm07 bash[22585]: cluster 2026-03-09T14:35:07.039516+0000 mgr.y (mgr.24310) 144 : cluster [DBG] pgmap v117: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:09.160 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:08 vm07 bash[22585]: audit 2026-03-09T14:35:07.383887+0000 mgr.y (mgr.24310) 145 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:35:09.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:08 vm07 bash[17480]: cluster 2026-03-09T14:35:07.039516+0000 mgr.y (mgr.24310) 144 : cluster [DBG] pgmap v117: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:09.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:08 vm07 bash[17480]: audit 2026-03-09T14:35:07.383887+0000 mgr.y (mgr.24310) 145 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:35:11.005 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:10 vm11 bash[17885]: 
cluster 2026-03-09T14:35:09.039818+0000 mgr.y (mgr.24310) 146 : cluster [DBG] pgmap v118: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:11.160 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:10 vm07 bash[22585]: cluster 2026-03-09T14:35:09.039818+0000 mgr.y (mgr.24310) 146 : cluster [DBG] pgmap v118: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:11.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:10 vm07 bash[17480]: cluster 2026-03-09T14:35:09.039818+0000 mgr.y (mgr.24310) 146 : cluster [DBG] pgmap v118: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:11.965 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph config set mon mon_warn_on_insecure_global_id_reclaim false --force' 2026-03-09T14:35:11.971 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:11 vm07 bash[17480]: cluster 2026-03-09T14:35:11.040367+0000 mgr.y (mgr.24310) 147 : cluster [DBG] pgmap v119: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:35:11.971 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:11 vm07 bash[22585]: cluster 2026-03-09T14:35:11.040367+0000 mgr.y (mgr.24310) 147 : cluster [DBG] pgmap v119: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:35:12.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:11 vm11 bash[17885]: cluster 2026-03-09T14:35:11.040367+0000 mgr.y (mgr.24310) 147 : cluster [DBG] pgmap v119: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:35:12.598 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:35:12 vm07 bash[17785]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:35:12] "GET /metrics HTTP/1.1" 200 214438 "" "Prometheus/2.33.4" 2026-03-09T14:35:12.640 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false --force' 2026-03-09T14:35:13.174 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph config set global log_to_journald false --force' 2026-03-09T14:35:13.646 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1' 2026-03-09T14:35:13.660 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:35:13 vm07 bash[42609]: level=error ts=2026-03-09T14:35:13.514Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 
err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:35:13.660 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:35:13 vm07 bash[42609]: level=warn ts=2026-03-09T14:35:13.516Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:35:13.660 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:35:13 vm07 bash[42609]: level=warn ts=2026-03-09T14:35:13.516Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:35:13.755 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:35:13 vm11 bash[18539]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:35:13] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T14:35:14.193 INFO:teuthology.orchestra.run.vm07.stdout:Initiating upgrade to quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T14:35:14.319 INFO:teuthology.run_tasks:Running task cephadm.shell... 2026-03-09T14:35:14.322 INFO:tasks.cephadm:Running commands on role mon.a host ubuntu@vm07.local 2026-03-09T14:35:14.322 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'while ceph orch upgrade status | jq '"'"'.in_progress'"'"' | grep true && ! 
ceph orch upgrade status | jq '"'"'.message'"'"' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; ceph health detail ; sleep 30 ; done' 2026-03-09T14:35:14.369 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:14 vm07 bash[17480]: cluster 2026-03-09T14:35:13.040649+0000 mgr.y (mgr.24310) 148 : cluster [DBG] pgmap v120: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:14.369 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:14 vm07 bash[22585]: cluster 2026-03-09T14:35:13.040649+0000 mgr.y (mgr.24310) 148 : cluster [DBG] pgmap v120: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:14.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:14 vm11 bash[17885]: cluster 2026-03-09T14:35:13.040649+0000 mgr.y (mgr.24310) 148 : cluster [DBG] pgmap v120: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:14.824 INFO:teuthology.orchestra.run.vm07.stdout:true 2026-03-09T14:35:15.169 INFO:teuthology.orchestra.run.vm07.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T14:35:15.169 INFO:teuthology.orchestra.run.vm07.stdout:alertmanager.a vm07 *:9093,9094 running (2m) 111s ago 2m 15.5M - ba2b418f427c a61514665550 2026-03-09T14:35:15.169 INFO:teuthology.orchestra.run.vm07.stdout:grafana.a vm11 *:3000 running (2m) 111s ago 2m 40.0M - 8.3.5 dad864ee21e9 540326cca8f5 2026-03-09T14:35:15.169 INFO:teuthology.orchestra.run.vm07.stdout:iscsi.foo.vm07.ohlmos vm07 running (118s) 111s ago 118s 63.6M - 3.5 e1d6a67b021e 6e71f6329b43 2026-03-09T14:35:15.169 INFO:teuthology.orchestra.run.vm07.stdout:mgr.x vm11 *:8443 running (5m) 111s ago 5m 398M - 17.2.0 e1d6a67b021e 1c2e5c27f796 2026-03-09T14:35:15.169 INFO:teuthology.orchestra.run.vm07.stdout:mgr.y vm07 *:9283 running (5m) 111s ago 5m 442M - 17.2.0 e1d6a67b021e df6605dd81b3 2026-03-09T14:35:15.169 INFO:teuthology.orchestra.run.vm07.stdout:mon.a vm07 running (5m) 111s ago 5m 49.0M 2048M 17.2.0 e1d6a67b021e 47602ca6fae7 2026-03-09T14:35:15.169 INFO:teuthology.orchestra.run.vm07.stdout:mon.b vm11 running (5m) 111s ago 5m 45.4M 2048M 17.2.0 e1d6a67b021e eac3b7829b01 2026-03-09T14:35:15.169 INFO:teuthology.orchestra.run.vm07.stdout:mon.c vm07 running (5m) 111s ago 5m 45.0M 2048M 17.2.0 e1d6a67b021e 9c901130627b 2026-03-09T14:35:15.169 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.a vm07 *:9100 running (2m) 111s ago 2m 8815k - 1dbe0e931976 10000a0b8245 2026-03-09T14:35:15.169 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.b vm11 *:9100 running (2m) 111s ago 2m 7516k - 1dbe0e931976 38d6b8c74501 2026-03-09T14:35:15.169 INFO:teuthology.orchestra.run.vm07.stdout:osd.0 vm07 running (4m) 111s ago 4m 46.2M 4096M 17.2.0 e1d6a67b021e 7a4a11fbf70d 2026-03-09T14:35:15.169 INFO:teuthology.orchestra.run.vm07.stdout:osd.1 vm07 running (4m) 111s ago 4m 48.3M 4096M 17.2.0 e1d6a67b021e 15e2e23b506b 2026-03-09T14:35:15.169 INFO:teuthology.orchestra.run.vm07.stdout:osd.2 vm07 running (4m) 111s ago 4m 43.3M 4096M 17.2.0 e1d6a67b021e fe41cd2240dc 2026-03-09T14:35:15.169 INFO:teuthology.orchestra.run.vm07.stdout:osd.3 vm07 running (4m) 111s ago 4m 44.3M 4096M 17.2.0 e1d6a67b021e b07b01a0b5aa 2026-03-09T14:35:15.169 INFO:teuthology.orchestra.run.vm07.stdout:osd.4 vm11 running (3m) 111s ago 3m 46.2M 4096M 17.2.0 e1d6a67b021e 172516d931e5 2026-03-09T14:35:15.169 
INFO:teuthology.orchestra.run.vm07.stdout:osd.5 vm11 running (3m) 111s ago 3m 43.8M 4096M 17.2.0 e1d6a67b021e d7defb26b5d1 2026-03-09T14:35:15.169 INFO:teuthology.orchestra.run.vm07.stdout:osd.6 vm11 running (3m) 111s ago 3m 44.1M 4096M 17.2.0 e1d6a67b021e 52e28e90b585 2026-03-09T14:35:15.169 INFO:teuthology.orchestra.run.vm07.stdout:osd.7 vm11 running (3m) 111s ago 3m 44.2M 4096M 17.2.0 e1d6a67b021e abb74346bf4d 2026-03-09T14:35:15.169 INFO:teuthology.orchestra.run.vm07.stdout:prometheus.a vm11 *:9095 running (2m) 111s ago 2m 38.0M - 514e6a882f6e 58ae57f001a5 2026-03-09T14:35:15.169 INFO:teuthology.orchestra.run.vm07.stdout:rgw.foo.vm07.urmgxb vm07 *:8000 running (2m) 111s ago 2m 81.5M - 17.2.0 e1d6a67b021e 765128ae03a3 2026-03-09T14:35:15.169 INFO:teuthology.orchestra.run.vm07.stdout:rgw.foo.vm11.ncyump vm11 *:8000 running (2m) 111s ago 2m 81.2M - 17.2.0 e1d6a67b021e 33917711cfd6 2026-03-09T14:35:15.169 INFO:teuthology.orchestra.run.vm07.stdout:rgw.smpl.vm07.tkkeli vm07 *:80 running (2m) 111s ago 2m 81.5M - 17.2.0 e1d6a67b021e 377fed84fff0 2026-03-09T14:35:15.169 INFO:teuthology.orchestra.run.vm07.stdout:rgw.smpl.vm11.ocxkef vm11 *:80 running (2m) 111s ago 2m 81.3M - 17.2.0 e1d6a67b021e 90ec06d07cd4 2026-03-09T14:35:15.393 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T14:35:15.393 INFO:teuthology.orchestra.run.vm07.stdout: "mon": { 2026-03-09T14:35:15.393 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 3 2026-03-09T14:35:15.393 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:35:15.393 INFO:teuthology.orchestra.run.vm07.stdout: "mgr": { 2026-03-09T14:35:15.393 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2 2026-03-09T14:35:15.393 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:35:15.393 INFO:teuthology.orchestra.run.vm07.stdout: "osd": { 2026-03-09T14:35:15.393 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8 2026-03-09T14:35:15.393 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:35:15.393 INFO:teuthology.orchestra.run.vm07.stdout: "mds": {}, 2026-03-09T14:35:15.393 INFO:teuthology.orchestra.run.vm07.stdout: "rgw": { 2026-03-09T14:35:15.393 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 4 2026-03-09T14:35:15.393 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:35:15.393 INFO:teuthology.orchestra.run.vm07.stdout: "overall": { 2026-03-09T14:35:15.393 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 17 2026-03-09T14:35:15.393 INFO:teuthology.orchestra.run.vm07.stdout: } 2026-03-09T14:35:15.393 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T14:35:15.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:15 vm11 bash[17885]: audit 2026-03-09T14:35:14.150635+0000 mgr.y (mgr.24310) 149 : audit [DBG] from='client.14877 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:35:15.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:15 vm11 bash[17885]: cephadm 2026-03-09T14:35:14.151534+0000 mgr.y (mgr.24310) 150 : cephadm [INF] Upgrade: Started with target 
quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T14:35:15.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:15 vm11 bash[17885]: audit 2026-03-09T14:35:14.192280+0000 mon.a (mon.0) 763 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:35:15.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:15 vm11 bash[17885]: audit 2026-03-09T14:35:14.208226+0000 mon.b (mon.2) 98 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:35:15.506 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:15 vm11 bash[17885]: audit 2026-03-09T14:35:14.211599+0000 mon.b (mon.2) 99 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:35:15.506 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:15 vm11 bash[17885]: audit 2026-03-09T14:35:14.263978+0000 mon.a (mon.0) 764 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:35:15.506 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:15 vm11 bash[17885]: cephadm 2026-03-09T14:35:14.276432+0000 mgr.y (mgr.24310) 151 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T14:35:15.585 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T14:35:15.585 INFO:teuthology.orchestra.run.vm07.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", 2026-03-09T14:35:15.585 INFO:teuthology.orchestra.run.vm07.stdout: "in_progress": true, 2026-03-09T14:35:15.585 INFO:teuthology.orchestra.run.vm07.stdout: "services_complete": [], 2026-03-09T14:35:15.585 INFO:teuthology.orchestra.run.vm07.stdout: "progress": "", 2026-03-09T14:35:15.585 INFO:teuthology.orchestra.run.vm07.stdout: "message": "Doing first pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df image" 2026-03-09T14:35:15.585 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T14:35:15.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:15 vm07 bash[22585]: audit 2026-03-09T14:35:14.150635+0000 mgr.y (mgr.24310) 149 : audit [DBG] from='client.14877 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:35:15.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:15 vm07 bash[22585]: cephadm 2026-03-09T14:35:14.151534+0000 mgr.y (mgr.24310) 150 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T14:35:15.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:15 vm07 bash[22585]: audit 2026-03-09T14:35:14.192280+0000 mon.a (mon.0) 763 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:35:15.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:15 vm07 bash[22585]: audit 2026-03-09T14:35:14.208226+0000 mon.b (mon.2) 98 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:35:15.662 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:15 vm07 bash[22585]: audit 2026-03-09T14:35:14.211599+0000 mon.b (mon.2) 99 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:35:15.662 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:15 vm07 bash[22585]: audit 
2026-03-09T14:35:14.263978+0000 mon.a (mon.0) 764 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:35:15.662 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:15 vm07 bash[22585]: cephadm 2026-03-09T14:35:14.276432+0000 mgr.y (mgr.24310) 151 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T14:35:15.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:15 vm07 bash[17480]: audit 2026-03-09T14:35:14.150635+0000 mgr.y (mgr.24310) 149 : audit [DBG] from='client.14877 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:35:15.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:15 vm07 bash[17480]: cephadm 2026-03-09T14:35:14.151534+0000 mgr.y (mgr.24310) 150 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T14:35:15.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:15 vm07 bash[17480]: audit 2026-03-09T14:35:14.192280+0000 mon.a (mon.0) 763 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:35:15.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:15 vm07 bash[17480]: audit 2026-03-09T14:35:14.208226+0000 mon.b (mon.2) 98 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:35:15.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:15 vm07 bash[17480]: audit 2026-03-09T14:35:14.211599+0000 mon.b (mon.2) 99 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:35:15.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:15 vm07 bash[17480]: audit 2026-03-09T14:35:14.263978+0000 mon.a (mon.0) 764 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:35:15.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:15 vm07 bash[17480]: cephadm 2026-03-09T14:35:14.276432+0000 mgr.y (mgr.24310) 151 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T14:35:15.802 INFO:teuthology.orchestra.run.vm07.stdout:HEALTH_OK 2026-03-09T14:35:16.660 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:16 vm07 bash[22585]: audit 2026-03-09T14:35:14.815460+0000 mgr.y (mgr.24310) 152 : audit [DBG] from='client.24769 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:35:16.660 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:16 vm07 bash[22585]: audit 2026-03-09T14:35:14.993012+0000 mgr.y (mgr.24310) 153 : audit [DBG] from='client.24689 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:35:16.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:16 vm07 bash[22585]: cluster 2026-03-09T14:35:15.041166+0000 mgr.y (mgr.24310) 154 : cluster [DBG] pgmap v121: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:35:16.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:16 vm07 bash[22585]: audit 2026-03-09T14:35:15.166914+0000 mgr.y (mgr.24310) 155 : audit [DBG] from='client.24781 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:35:16.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:16 vm07 
bash[22585]: audit 2026-03-09T14:35:15.395575+0000 mon.a (mon.0) 765 : audit [DBG] from='client.? 192.168.123.107:0/1698765397' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:35:16.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:16 vm07 bash[22585]: audit 2026-03-09T14:35:15.587913+0000 mgr.y (mgr.24310) 156 : audit [DBG] from='client.24793 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:35:16.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:16 vm07 bash[22585]: audit 2026-03-09T14:35:15.804403+0000 mon.c (mon.1) 34 : audit [DBG] from='client.? 192.168.123.107:0/3753492227' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:35:16.661 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:16 vm07 bash[17480]: audit 2026-03-09T14:35:14.815460+0000 mgr.y (mgr.24310) 152 : audit [DBG] from='client.24769 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:35:16.661 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:16 vm07 bash[17480]: audit 2026-03-09T14:35:14.993012+0000 mgr.y (mgr.24310) 153 : audit [DBG] from='client.24689 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:35:16.661 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:16 vm07 bash[17480]: cluster 2026-03-09T14:35:15.041166+0000 mgr.y (mgr.24310) 154 : cluster [DBG] pgmap v121: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:35:16.661 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:16 vm07 bash[17480]: audit 2026-03-09T14:35:15.166914+0000 mgr.y (mgr.24310) 155 : audit [DBG] from='client.24781 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:35:16.661 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:16 vm07 bash[17480]: audit 2026-03-09T14:35:15.395575+0000 mon.a (mon.0) 765 : audit [DBG] from='client.? 192.168.123.107:0/1698765397' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:35:16.661 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:16 vm07 bash[17480]: audit 2026-03-09T14:35:15.587913+0000 mgr.y (mgr.24310) 156 : audit [DBG] from='client.24793 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:35:16.661 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:16 vm07 bash[17480]: audit 2026-03-09T14:35:15.804403+0000 mon.c (mon.1) 34 : audit [DBG] from='client.? 
192.168.123.107:0/3753492227' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:35:16.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:16 vm11 bash[17885]: audit 2026-03-09T14:35:14.815460+0000 mgr.y (mgr.24310) 152 : audit [DBG] from='client.24769 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:35:16.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:16 vm11 bash[17885]: audit 2026-03-09T14:35:14.993012+0000 mgr.y (mgr.24310) 153 : audit [DBG] from='client.24689 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:35:16.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:16 vm11 bash[17885]: cluster 2026-03-09T14:35:15.041166+0000 mgr.y (mgr.24310) 154 : cluster [DBG] pgmap v121: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:35:16.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:16 vm11 bash[17885]: audit 2026-03-09T14:35:15.166914+0000 mgr.y (mgr.24310) 155 : audit [DBG] from='client.24781 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:35:16.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:16 vm11 bash[17885]: audit 2026-03-09T14:35:15.395575+0000 mon.a (mon.0) 765 : audit [DBG] from='client.? 192.168.123.107:0/1698765397' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:35:16.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:16 vm11 bash[17885]: audit 2026-03-09T14:35:15.587913+0000 mgr.y (mgr.24310) 156 : audit [DBG] from='client.24793 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:35:16.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:16 vm11 bash[17885]: audit 2026-03-09T14:35:15.804403+0000 mon.c (mon.1) 34 : audit [DBG] from='client.? 
192.168.123.107:0/3753492227' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:35:18.660 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:18 vm07 bash[22585]: cluster 2026-03-09T14:35:17.041493+0000 mgr.y (mgr.24310) 157 : cluster [DBG] pgmap v122: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:18.660 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:18 vm07 bash[22585]: audit 2026-03-09T14:35:17.391200+0000 mgr.y (mgr.24310) 158 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:35:18.661 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:18 vm07 bash[17480]: cluster 2026-03-09T14:35:17.041493+0000 mgr.y (mgr.24310) 157 : cluster [DBG] pgmap v122: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:18.661 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:18 vm07 bash[17480]: audit 2026-03-09T14:35:17.391200+0000 mgr.y (mgr.24310) 158 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:35:18.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:18 vm11 bash[17885]: cluster 2026-03-09T14:35:17.041493+0000 mgr.y (mgr.24310) 157 : cluster [DBG] pgmap v122: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:18.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:18 vm11 bash[17885]: audit 2026-03-09T14:35:17.391200+0000 mgr.y (mgr.24310) 158 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:35:20.660 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:20 vm07 bash[17480]: cluster 2026-03-09T14:35:19.042223+0000 mgr.y (mgr.24310) 159 : cluster [DBG] pgmap v123: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:20.660 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:20 vm07 bash[22585]: cluster 2026-03-09T14:35:19.042223+0000 mgr.y (mgr.24310) 159 : cluster [DBG] pgmap v123: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:20.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:20 vm11 bash[17885]: cluster 2026-03-09T14:35:19.042223+0000 mgr.y (mgr.24310) 159 : cluster [DBG] pgmap v123: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:22.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:22 vm07 bash[22585]: cluster 2026-03-09T14:35:21.042674+0000 mgr.y (mgr.24310) 160 : cluster [DBG] pgmap v124: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:35:22.910 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:22 vm07 bash[17480]: cluster 2026-03-09T14:35:21.042674+0000 mgr.y (mgr.24310) 160 : cluster [DBG] pgmap v124: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:35:22.911 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:35:22 vm07 bash[17785]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:35:22] "GET /metrics HTTP/1.1" 200 214438 "" "Prometheus/2.33.4" 2026-03-09T14:35:23.005 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 
14:35:22 vm11 bash[17885]: cluster 2026-03-09T14:35:21.042674+0000 mgr.y (mgr.24310) 160 : cluster [DBG] pgmap v124: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:35:23.702 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:35:23 vm11 bash[18539]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:35:23] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T14:35:23.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:23 vm07 bash[22585]: audit 2026-03-09T14:35:23.101101+0000 mon.b (mon.2) 100 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:35:23.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:23 vm07 bash[22585]: audit 2026-03-09T14:35:23.103025+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:35:23.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:23 vm07 bash[22585]: audit 2026-03-09T14:35:23.118978+0000 mon.b (mon.2) 101 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:35:23.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:23 vm07 bash[22585]: audit 2026-03-09T14:35:23.120941+0000 mon.a (mon.0) 767 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:35:23.910 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:35:23 vm07 bash[42609]: level=error ts=2026-03-09T14:35:23.515Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:35:23.911 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:35:23 vm07 bash[42609]: level=warn ts=2026-03-09T14:35:23.518Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:35:23.911 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:35:23 vm07 bash[42609]: level=warn ts=2026-03-09T14:35:23.518Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:35:23.911 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:23 vm07 bash[17480]: audit 2026-03-09T14:35:23.101101+0000 mon.b (mon.2) 100 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:35:23.911 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:23 vm07 bash[17480]: audit 2026-03-09T14:35:23.103025+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:35:23.911 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:23 vm07 bash[17480]: audit 2026-03-09T14:35:23.118978+0000 mon.b (mon.2) 101 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:35:23.911 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:23 vm07 bash[17480]: audit 2026-03-09T14:35:23.120941+0000 mon.a (mon.0) 767 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:35:24.005 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:23 vm11 bash[17885]: audit 2026-03-09T14:35:23.101101+0000 mon.b (mon.2) 100 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:35:24.005 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:23 vm11 bash[17885]: audit 2026-03-09T14:35:23.103025+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:35:24.005 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:23 vm11 bash[17885]: audit 2026-03-09T14:35:23.118978+0000 mon.b (mon.2) 101 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:35:24.005 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:23 vm11 bash[17885]: audit 2026-03-09T14:35:23.120941+0000 mon.a (mon.0) 767 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:35:25.005 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:24 vm11 bash[17885]: cluster 2026-03-09T14:35:23.042996+0000 mgr.y (mgr.24310) 161 : cluster [DBG] pgmap v125: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:25.160 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:24 vm07 bash[22585]: cluster 2026-03-09T14:35:23.042996+0000 mgr.y (mgr.24310) 161 : cluster [DBG] pgmap v125: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:25.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:24 vm07 bash[17480]: cluster 2026-03-09T14:35:23.042996+0000 mgr.y (mgr.24310) 161 : cluster [DBG] pgmap v125: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:27.005 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:26 vm11 bash[17885]: cluster 2026-03-09T14:35:25.043558+0000 mgr.y (mgr.24310) 162 : cluster [DBG] pgmap v126: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:35:27.160 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:26 vm07 bash[22585]: cluster 2026-03-09T14:35:25.043558+0000 mgr.y 
(mgr.24310) 162 : cluster [DBG] pgmap v126: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:35:27.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:26 vm07 bash[17480]: cluster 2026-03-09T14:35:25.043558+0000 mgr.y (mgr.24310) 162 : cluster [DBG] pgmap v126: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:35:29.005 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:28 vm11 bash[17885]: cluster 2026-03-09T14:35:27.043917+0000 mgr.y (mgr.24310) 163 : cluster [DBG] pgmap v127: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:29.005 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:28 vm11 bash[17885]: audit 2026-03-09T14:35:27.398793+0000 mgr.y (mgr.24310) 164 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:35:29.160 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:28 vm07 bash[22585]: cluster 2026-03-09T14:35:27.043917+0000 mgr.y (mgr.24310) 163 : cluster [DBG] pgmap v127: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:29.160 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:28 vm07 bash[22585]: audit 2026-03-09T14:35:27.398793+0000 mgr.y (mgr.24310) 164 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:35:29.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:28 vm07 bash[17480]: cluster 2026-03-09T14:35:27.043917+0000 mgr.y (mgr.24310) 163 : cluster [DBG] pgmap v127: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:29.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:28 vm07 bash[17480]: audit 2026-03-09T14:35:27.398793+0000 mgr.y (mgr.24310) 164 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:35:30.005 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:29 vm11 bash[17885]: cluster 2026-03-09T14:35:29.044514+0000 mgr.y (mgr.24310) 165 : cluster [DBG] pgmap v128: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:35:30.160 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:29 vm07 bash[22585]: cluster 2026-03-09T14:35:29.044514+0000 mgr.y (mgr.24310) 165 : cluster [DBG] pgmap v128: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:35:30.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:29 vm07 bash[17480]: cluster 2026-03-09T14:35:29.044514+0000 mgr.y (mgr.24310) 165 : cluster [DBG] pgmap v128: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:35:32.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:32 vm07 bash[22585]: cluster 2026-03-09T14:35:31.044830+0000 mgr.y (mgr.24310) 166 : cluster [DBG] pgmap v129: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:35:32.910 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:35:32 vm07 bash[17785]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:35:32] "GET /metrics HTTP/1.1" 200 214426 "" "Prometheus/2.33.4" 2026-03-09T14:35:32.910 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:32 vm07 bash[17480]: cluster 2026-03-09T14:35:31.044830+0000 mgr.y (mgr.24310) 166 : cluster [DBG] pgmap v129: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:35:33.005 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:32 vm11 bash[17885]: cluster 2026-03-09T14:35:31.044830+0000 mgr.y (mgr.24310) 166 : cluster [DBG] pgmap v129: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:35:33.755 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:35:33 vm11 bash[18539]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:35:33] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T14:35:33.910 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:35:33 vm07 bash[42609]: level=error ts=2026-03-09T14:35:33.516Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:35:33.910 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:35:33 vm07 bash[42609]: level=warn ts=2026-03-09T14:35:33.518Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:35:33.910 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:35:33 vm07 bash[42609]: level=warn ts=2026-03-09T14:35:33.518Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:35:35.005 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:34 vm11 bash[17885]: cluster 2026-03-09T14:35:33.045151+0000 mgr.y (mgr.24310) 167 : cluster [DBG] pgmap v130: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:35.160 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:34 vm07 bash[22585]: cluster 2026-03-09T14:35:33.045151+0000 mgr.y (mgr.24310) 167 : cluster [DBG] pgmap v130: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:35.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:34 vm07 bash[17480]: cluster 2026-03-09T14:35:33.045151+0000 mgr.y (mgr.24310) 167 : cluster [DBG] pgmap v130: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:37.005 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:36 vm11 bash[17885]: cluster 2026-03-09T14:35:35.045758+0000 mgr.y (mgr.24310) 168 : cluster [DBG] pgmap v131: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:35:37.160 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 
09 14:35:36 vm07 bash[22585]: cluster 2026-03-09T14:35:35.045758+0000 mgr.y (mgr.24310) 168 : cluster [DBG] pgmap v131: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:35:37.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:36 vm07 bash[17480]: cluster 2026-03-09T14:35:35.045758+0000 mgr.y (mgr.24310) 168 : cluster [DBG] pgmap v131: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:35:38.005 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:37 vm11 bash[17885]: cluster 2026-03-09T14:35:37.046099+0000 mgr.y (mgr.24310) 169 : cluster [DBG] pgmap v132: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:38.005 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:37 vm11 bash[17885]: audit 2026-03-09T14:35:37.403082+0000 mgr.y (mgr.24310) 170 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:35:38.160 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:37 vm07 bash[22585]: cluster 2026-03-09T14:35:37.046099+0000 mgr.y (mgr.24310) 169 : cluster [DBG] pgmap v132: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:38.160 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:37 vm07 bash[22585]: audit 2026-03-09T14:35:37.403082+0000 mgr.y (mgr.24310) 170 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:35:38.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:37 vm07 bash[17480]: cluster 2026-03-09T14:35:37.046099+0000 mgr.y (mgr.24310) 169 : cluster [DBG] pgmap v132: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:38.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:37 vm07 bash[17480]: audit 2026-03-09T14:35:37.403082+0000 mgr.y (mgr.24310) 170 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:35:40.410 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:40 vm07 bash[22585]: cluster 2026-03-09T14:35:39.046598+0000 mgr.y (mgr.24310) 171 : cluster [DBG] pgmap v133: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:35:40.410 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:40 vm07 bash[17480]: cluster 2026-03-09T14:35:39.046598+0000 mgr.y (mgr.24310) 171 : cluster [DBG] pgmap v133: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:35:40.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:40 vm11 bash[17885]: cluster 2026-03-09T14:35:39.046598+0000 mgr.y (mgr.24310) 171 : cluster [DBG] pgmap v133: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:35:42.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:41 vm11 bash[17885]: cluster 2026-03-09T14:35:41.047001+0000 mgr.y (mgr.24310) 172 : cluster [DBG] pgmap v134: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:42.410 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:41 vm07 bash[22585]: cluster 2026-03-09T14:35:41.047001+0000 mgr.y (mgr.24310) 172 : 
cluster [DBG] pgmap v134: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:42.410 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:41 vm07 bash[17480]: cluster 2026-03-09T14:35:41.047001+0000 mgr.y (mgr.24310) 172 : cluster [DBG] pgmap v134: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:42.910 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:35:42 vm07 bash[17785]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:35:42] "GET /metrics HTTP/1.1" 200 214385 "" "Prometheus/2.33.4" 2026-03-09T14:35:43.755 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:35:43 vm11 bash[18539]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:35:43] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T14:35:43.910 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:35:43 vm07 bash[42609]: level=error ts=2026-03-09T14:35:43.516Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:35:43.910 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:35:43 vm07 bash[42609]: level=warn ts=2026-03-09T14:35:43.519Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:35:43.911 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:35:43 vm07 bash[42609]: level=warn ts=2026-03-09T14:35:43.519Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:35:44.410 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:44 vm07 bash[22585]: cluster 2026-03-09T14:35:43.047354+0000 mgr.y (mgr.24310) 173 : cluster [DBG] pgmap v135: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:44.410 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:44 vm07 bash[17480]: cluster 2026-03-09T14:35:43.047354+0000 mgr.y (mgr.24310) 173 : cluster [DBG] pgmap v135: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:44.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:44 vm11 bash[17885]: cluster 2026-03-09T14:35:43.047354+0000 mgr.y (mgr.24310) 173 : cluster [DBG] pgmap v135: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:46.013 INFO:teuthology.orchestra.run.vm07.stdout:true 2026-03-09T14:35:46.410 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:46 vm07 bash[22585]: cluster 2026-03-09T14:35:45.048002+0000 mgr.y (mgr.24310) 174 : cluster [DBG] pgmap v136: 161 pgs: 161 active+clean; 457 
KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:35:46.410 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:46 vm07 bash[17480]: cluster 2026-03-09T14:35:45.048002+0000 mgr.y (mgr.24310) 174 : cluster [DBG] pgmap v136: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:35:46.414 INFO:teuthology.orchestra.run.vm07.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T14:35:46.414 INFO:teuthology.orchestra.run.vm07.stdout:alertmanager.a vm07 *:9093,9094 running (2m) 2m ago 3m 15.5M - ba2b418f427c a61514665550 2026-03-09T14:35:46.414 INFO:teuthology.orchestra.run.vm07.stdout:grafana.a vm11 *:3000 running (2m) 2m ago 2m 40.0M - 8.3.5 dad864ee21e9 540326cca8f5 2026-03-09T14:35:46.414 INFO:teuthology.orchestra.run.vm07.stdout:iscsi.foo.vm07.ohlmos vm07 running (2m) 2m ago 2m 63.6M - 3.5 e1d6a67b021e 6e71f6329b43 2026-03-09T14:35:46.414 INFO:teuthology.orchestra.run.vm07.stdout:mgr.x vm11 *:8443 running (5m) 2m ago 5m 398M - 17.2.0 e1d6a67b021e 1c2e5c27f796 2026-03-09T14:35:46.414 INFO:teuthology.orchestra.run.vm07.stdout:mgr.y vm07 *:9283 running (6m) 2m ago 6m 442M - 17.2.0 e1d6a67b021e df6605dd81b3 2026-03-09T14:35:46.414 INFO:teuthology.orchestra.run.vm07.stdout:mon.a vm07 running (6m) 2m ago 6m 49.0M 2048M 17.2.0 e1d6a67b021e 47602ca6fae7 2026-03-09T14:35:46.414 INFO:teuthology.orchestra.run.vm07.stdout:mon.b vm11 running (5m) 2m ago 5m 45.4M 2048M 17.2.0 e1d6a67b021e eac3b7829b01 2026-03-09T14:35:46.414 INFO:teuthology.orchestra.run.vm07.stdout:mon.c vm07 running (5m) 2m ago 5m 45.0M 2048M 17.2.0 e1d6a67b021e 9c901130627b 2026-03-09T14:35:46.414 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.a vm07 *:9100 running (3m) 2m ago 3m 8815k - 1dbe0e931976 10000a0b8245 2026-03-09T14:35:46.415 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.b vm11 *:9100 running (3m) 2m ago 3m 7516k - 1dbe0e931976 38d6b8c74501 2026-03-09T14:35:46.415 INFO:teuthology.orchestra.run.vm07.stdout:osd.0 vm07 running (5m) 2m ago 5m 46.2M 4096M 17.2.0 e1d6a67b021e 7a4a11fbf70d 2026-03-09T14:35:46.415 INFO:teuthology.orchestra.run.vm07.stdout:osd.1 vm07 running (5m) 2m ago 5m 48.3M 4096M 17.2.0 e1d6a67b021e 15e2e23b506b 2026-03-09T14:35:46.415 INFO:teuthology.orchestra.run.vm07.stdout:osd.2 vm07 running (4m) 2m ago 4m 43.3M 4096M 17.2.0 e1d6a67b021e fe41cd2240dc 2026-03-09T14:35:46.415 INFO:teuthology.orchestra.run.vm07.stdout:osd.3 vm07 running (4m) 2m ago 4m 44.3M 4096M 17.2.0 e1d6a67b021e b07b01a0b5aa 2026-03-09T14:35:46.415 INFO:teuthology.orchestra.run.vm07.stdout:osd.4 vm11 running (4m) 2m ago 4m 46.2M 4096M 17.2.0 e1d6a67b021e 172516d931e5 2026-03-09T14:35:46.415 INFO:teuthology.orchestra.run.vm07.stdout:osd.5 vm11 running (4m) 2m ago 4m 43.8M 4096M 17.2.0 e1d6a67b021e d7defb26b5d1 2026-03-09T14:35:46.415 INFO:teuthology.orchestra.run.vm07.stdout:osd.6 vm11 running (3m) 2m ago 3m 44.1M 4096M 17.2.0 e1d6a67b021e 52e28e90b585 2026-03-09T14:35:46.415 INFO:teuthology.orchestra.run.vm07.stdout:osd.7 vm11 running (3m) 2m ago 3m 44.2M 4096M 17.2.0 e1d6a67b021e abb74346bf4d 2026-03-09T14:35:46.415 INFO:teuthology.orchestra.run.vm07.stdout:prometheus.a vm11 *:9095 running (2m) 2m ago 3m 38.0M - 514e6a882f6e 58ae57f001a5 2026-03-09T14:35:46.415 INFO:teuthology.orchestra.run.vm07.stdout:rgw.foo.vm07.urmgxb vm07 *:8000 running (2m) 2m ago 2m 81.5M - 17.2.0 e1d6a67b021e 765128ae03a3 2026-03-09T14:35:46.415 INFO:teuthology.orchestra.run.vm07.stdout:rgw.foo.vm11.ncyump 
vm11 *:8000 running (2m) 2m ago 2m 81.2M - 17.2.0 e1d6a67b021e 33917711cfd6 2026-03-09T14:35:46.415 INFO:teuthology.orchestra.run.vm07.stdout:rgw.smpl.vm07.tkkeli vm07 *:80 running (2m) 2m ago 2m 81.5M - 17.2.0 e1d6a67b021e 377fed84fff0 2026-03-09T14:35:46.415 INFO:teuthology.orchestra.run.vm07.stdout:rgw.smpl.vm11.ocxkef vm11 *:80 running (2m) 2m ago 2m 81.3M - 17.2.0 e1d6a67b021e 90ec06d07cd4 2026-03-09T14:35:46.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:46 vm11 bash[17885]: cluster 2026-03-09T14:35:45.048002+0000 mgr.y (mgr.24310) 174 : cluster [DBG] pgmap v136: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:35:46.667 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T14:35:46.667 INFO:teuthology.orchestra.run.vm07.stdout: "mon": { 2026-03-09T14:35:46.667 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 3 2026-03-09T14:35:46.667 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:35:46.667 INFO:teuthology.orchestra.run.vm07.stdout: "mgr": { 2026-03-09T14:35:46.667 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2 2026-03-09T14:35:46.667 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:35:46.667 INFO:teuthology.orchestra.run.vm07.stdout: "osd": { 2026-03-09T14:35:46.667 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8 2026-03-09T14:35:46.667 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:35:46.667 INFO:teuthology.orchestra.run.vm07.stdout: "mds": {}, 2026-03-09T14:35:46.667 INFO:teuthology.orchestra.run.vm07.stdout: "rgw": { 2026-03-09T14:35:46.667 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 4 2026-03-09T14:35:46.667 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:35:46.667 INFO:teuthology.orchestra.run.vm07.stdout: "overall": { 2026-03-09T14:35:46.667 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 17 2026-03-09T14:35:46.667 INFO:teuthology.orchestra.run.vm07.stdout: } 2026-03-09T14:35:46.667 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T14:35:46.869 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T14:35:46.869 INFO:teuthology.orchestra.run.vm07.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", 2026-03-09T14:35:46.869 INFO:teuthology.orchestra.run.vm07.stdout: "in_progress": true, 2026-03-09T14:35:46.869 INFO:teuthology.orchestra.run.vm07.stdout: "services_complete": [], 2026-03-09T14:35:46.869 INFO:teuthology.orchestra.run.vm07.stdout: "progress": "", 2026-03-09T14:35:46.869 INFO:teuthology.orchestra.run.vm07.stdout: "message": "Doing first pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df image" 2026-03-09T14:35:46.869 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T14:35:47.107 INFO:teuthology.orchestra.run.vm07.stdout:HEALTH_OK 2026-03-09T14:35:47.410 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:47 vm07 bash[22585]: audit 2026-03-09T14:35:46.005837+0000 mgr.y (mgr.24310) 175 : audit [DBG] from='client.24802 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:35:47.410 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 
14:35:47 vm07 bash[22585]: audit 2026-03-09T14:35:46.218027+0000 mgr.y (mgr.24310) 176 : audit [DBG] from='client.14913 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:35:47.410 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:47 vm07 bash[22585]: audit 2026-03-09T14:35:46.412856+0000 mgr.y (mgr.24310) 177 : audit [DBG] from='client.14916 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:35:47.410 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:47 vm07 bash[22585]: audit 2026-03-09T14:35:46.670522+0000 mon.a (mon.0) 768 : audit [DBG] from='client.? 192.168.123.107:0/272375362' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:35:47.410 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:47 vm07 bash[22585]: audit 2026-03-09T14:35:47.110626+0000 mon.c (mon.1) 35 : audit [DBG] from='client.? 192.168.123.107:0/1249796260' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:35:47.410 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:47 vm07 bash[17480]: audit 2026-03-09T14:35:46.005837+0000 mgr.y (mgr.24310) 175 : audit [DBG] from='client.24802 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:35:47.410 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:47 vm07 bash[17480]: audit 2026-03-09T14:35:46.218027+0000 mgr.y (mgr.24310) 176 : audit [DBG] from='client.14913 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:35:47.410 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:47 vm07 bash[17480]: audit 2026-03-09T14:35:46.412856+0000 mgr.y (mgr.24310) 177 : audit [DBG] from='client.14916 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:35:47.410 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:47 vm07 bash[17480]: audit 2026-03-09T14:35:46.670522+0000 mon.a (mon.0) 768 : audit [DBG] from='client.? 192.168.123.107:0/272375362' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:35:47.410 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:47 vm07 bash[17480]: audit 2026-03-09T14:35:47.110626+0000 mon.c (mon.1) 35 : audit [DBG] from='client.? 192.168.123.107:0/1249796260' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:35:47.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:47 vm11 bash[17885]: audit 2026-03-09T14:35:46.005837+0000 mgr.y (mgr.24310) 175 : audit [DBG] from='client.24802 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:35:47.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:47 vm11 bash[17885]: audit 2026-03-09T14:35:46.218027+0000 mgr.y (mgr.24310) 176 : audit [DBG] from='client.14913 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:35:47.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:47 vm11 bash[17885]: audit 2026-03-09T14:35:46.412856+0000 mgr.y (mgr.24310) 177 : audit [DBG] from='client.14916 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:35:47.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:47 vm11 bash[17885]: audit 2026-03-09T14:35:46.670522+0000 mon.a (mon.0) 768 : audit [DBG] from='client.? 
192.168.123.107:0/272375362' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:35:47.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:47 vm11 bash[17885]: audit 2026-03-09T14:35:47.110626+0000 mon.c (mon.1) 35 : audit [DBG] from='client.? 192.168.123.107:0/1249796260' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:35:48.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:48 vm11 bash[17885]: audit 2026-03-09T14:35:46.871911+0000 mgr.y (mgr.24310) 178 : audit [DBG] from='client.24823 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:35:48.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:48 vm11 bash[17885]: cluster 2026-03-09T14:35:47.048365+0000 mgr.y (mgr.24310) 179 : cluster [DBG] pgmap v137: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:48.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:48 vm11 bash[17885]: audit 2026-03-09T14:35:47.413257+0000 mgr.y (mgr.24310) 180 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:35:48.660 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:48 vm07 bash[22585]: audit 2026-03-09T14:35:46.871911+0000 mgr.y (mgr.24310) 178 : audit [DBG] from='client.24823 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:35:48.660 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:48 vm07 bash[22585]: cluster 2026-03-09T14:35:47.048365+0000 mgr.y (mgr.24310) 179 : cluster [DBG] pgmap v137: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:48.660 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:48 vm07 bash[22585]: audit 2026-03-09T14:35:47.413257+0000 mgr.y (mgr.24310) 180 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:35:48.660 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:48 vm07 bash[17480]: audit 2026-03-09T14:35:46.871911+0000 mgr.y (mgr.24310) 178 : audit [DBG] from='client.24823 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:35:48.660 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:48 vm07 bash[17480]: cluster 2026-03-09T14:35:47.048365+0000 mgr.y (mgr.24310) 179 : cluster [DBG] pgmap v137: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:48.660 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:48 vm07 bash[17480]: audit 2026-03-09T14:35:47.413257+0000 mgr.y (mgr.24310) 180 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:35:50.659 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:50 vm07 bash[22585]: cluster 2026-03-09T14:35:49.048949+0000 mgr.y (mgr.24310) 181 : cluster [DBG] pgmap v138: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:35:50.660 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:50 vm07 bash[17480]: cluster 2026-03-09T14:35:49.048949+0000 mgr.y (mgr.24310) 181 : cluster [DBG] pgmap v138: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 
KiB/s rd, 1 op/s 2026-03-09T14:35:50.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:50 vm11 bash[17885]: cluster 2026-03-09T14:35:49.048949+0000 mgr.y (mgr.24310) 181 : cluster [DBG] pgmap v138: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:35:52.660 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:52 vm07 bash[22585]: cluster 2026-03-09T14:35:51.049276+0000 mgr.y (mgr.24310) 182 : cluster [DBG] pgmap v139: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:52.660 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:52 vm07 bash[17480]: cluster 2026-03-09T14:35:51.049276+0000 mgr.y (mgr.24310) 182 : cluster [DBG] pgmap v139: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:52.660 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:35:52 vm07 bash[17785]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:35:52] "GET /metrics HTTP/1.1" 200 214385 "" "Prometheus/2.33.4" 2026-03-09T14:35:52.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:52 vm11 bash[17885]: cluster 2026-03-09T14:35:51.049276+0000 mgr.y (mgr.24310) 182 : cluster [DBG] pgmap v139: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:53.755 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:35:53 vm11 bash[18539]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:35:53] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T14:35:53.909 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:35:53 vm07 bash[42609]: level=error ts=2026-03-09T14:35:53.516Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:35:53.909 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:35:53 vm07 bash[42609]: level=warn ts=2026-03-09T14:35:53.518Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:35:53.909 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:35:53 vm07 bash[42609]: level=warn ts=2026-03-09T14:35:53.518Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:35:54.659 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:54 vm07 bash[22585]: cluster 2026-03-09T14:35:53.049655+0000 mgr.y (mgr.24310) 183 : cluster [DBG] pgmap v140: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:54.660 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:54 vm07 bash[17480]: cluster 
2026-03-09T14:35:53.049655+0000 mgr.y (mgr.24310) 183 : cluster [DBG] pgmap v140: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:54.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:54 vm11 bash[17885]: cluster 2026-03-09T14:35:53.049655+0000 mgr.y (mgr.24310) 183 : cluster [DBG] pgmap v140: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:56.659 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:56 vm07 bash[22585]: cluster 2026-03-09T14:35:55.050172+0000 mgr.y (mgr.24310) 184 : cluster [DBG] pgmap v141: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:35:56.660 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:56 vm07 bash[17480]: cluster 2026-03-09T14:35:55.050172+0000 mgr.y (mgr.24310) 184 : cluster [DBG] pgmap v141: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:35:56.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:56 vm11 bash[17885]: cluster 2026-03-09T14:35:55.050172+0000 mgr.y (mgr.24310) 184 : cluster [DBG] pgmap v141: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:35:58.659 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:58 vm07 bash[22585]: cluster 2026-03-09T14:35:57.050474+0000 mgr.y (mgr.24310) 185 : cluster [DBG] pgmap v142: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:58.660 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:35:58 vm07 bash[22585]: audit 2026-03-09T14:35:57.423098+0000 mgr.y (mgr.24310) 186 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:35:58.660 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:58 vm07 bash[17480]: cluster 2026-03-09T14:35:57.050474+0000 mgr.y (mgr.24310) 185 : cluster [DBG] pgmap v142: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:58.660 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:35:58 vm07 bash[17480]: audit 2026-03-09T14:35:57.423098+0000 mgr.y (mgr.24310) 186 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:35:58.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:58 vm11 bash[17885]: cluster 2026-03-09T14:35:57.050474+0000 mgr.y (mgr.24310) 185 : cluster [DBG] pgmap v142: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:35:58.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:35:58 vm11 bash[17885]: audit 2026-03-09T14:35:57.423098+0000 mgr.y (mgr.24310) 186 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:36:01.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:00 vm07 bash[22585]: cluster 2026-03-09T14:35:59.050924+0000 mgr.y (mgr.24310) 187 : cluster [DBG] pgmap v143: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:36:01.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:00 vm07 bash[17480]: cluster 2026-03-09T14:35:59.050924+0000 mgr.y (mgr.24310) 187 : cluster [DBG] pgmap v143: 161 pgs: 161 
active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:36:01.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:00 vm11 bash[17885]: cluster 2026-03-09T14:35:59.050924+0000 mgr.y (mgr.24310) 187 : cluster [DBG] pgmap v143: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:36:02.534 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:02 vm07 bash[22585]: cluster 2026-03-09T14:36:01.051232+0000 mgr.y (mgr.24310) 188 : cluster [DBG] pgmap v144: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:02.534 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:02 vm07 bash[17480]: cluster 2026-03-09T14:36:01.051232+0000 mgr.y (mgr.24310) 188 : cluster [DBG] pgmap v144: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:02.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:02 vm11 bash[17885]: cluster 2026-03-09T14:36:01.051232+0000 mgr.y (mgr.24310) 188 : cluster [DBG] pgmap v144: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:02.909 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:36:02 vm07 bash[17785]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:36:02] "GET /metrics HTTP/1.1" 200 214370 "" "Prometheus/2.33.4" 2026-03-09T14:36:03.755 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:36:03 vm11 bash[18539]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:36:03] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T14:36:03.909 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:36:03 vm07 bash[42609]: level=error ts=2026-03-09T14:36:03.517Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:36:03.909 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:36:03 vm07 bash[42609]: level=warn ts=2026-03-09T14:36:03.519Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:36:03.909 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:36:03 vm07 bash[42609]: level=warn ts=2026-03-09T14:36:03.520Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:36:05.005 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:04 vm11 bash[17885]: cluster 2026-03-09T14:36:03.053934+0000 mgr.y (mgr.24310) 189 : cluster [DBG] pgmap v145: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:05.159 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:04 vm07 bash[22585]: cluster 2026-03-09T14:36:03.053934+0000 mgr.y (mgr.24310) 189 : cluster [DBG] pgmap v145: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:05.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:04 vm07 bash[17480]: cluster 2026-03-09T14:36:03.053934+0000 mgr.y (mgr.24310) 189 : cluster [DBG] pgmap v145: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:06.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:06 vm11 bash[17885]: cluster 2026-03-09T14:36:05.054557+0000 mgr.y (mgr.24310) 190 : cluster [DBG] pgmap v146: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:36:06.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:06 vm11 bash[17885]: audit 2026-03-09T14:36:05.506236+0000 mon.a (mon.0) 769 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:36:06.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:06 vm11 bash[17885]: audit 2026-03-09T14:36:05.507429+0000 mon.b (mon.2) 102 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:36:06.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:06 vm11 bash[17885]: cephadm 2026-03-09T14:36:05.508533+0000 mgr.y (mgr.24310) 191 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (unknown) 2026-03-09T14:36:06.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:06 vm11 bash[17885]: cephadm 2026-03-09T14:36:05.508567+0000 mgr.y (mgr.24310) 192 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-09T14:36:06.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:06 vm11 bash[17885]: audit 2026-03-09T14:36:05.514643+0000 mon.a (mon.0) 770 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:36:06.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:06 vm11 bash[17885]: cephadm 2026-03-09T14:36:05.515599+0000 mgr.y (mgr.24310) 193 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.y) 2026-03-09T14:36:06.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:06 vm11 bash[17885]: cephadm 2026-03-09T14:36:05.655233+0000 mgr.y (mgr.24310) 194 : cephadm [INF] Upgrade: Pulling quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df on vm11 2026-03-09T14:36:06.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:06 vm07 bash[22585]: cluster 2026-03-09T14:36:05.054557+0000 mgr.y (mgr.24310) 190 : cluster [DBG] pgmap v146: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:36:06.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:06 vm07 bash[22585]: audit 2026-03-09T14:36:05.506236+0000 mon.a (mon.0) 769 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:36:06.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:06 vm07 bash[22585]: audit 2026-03-09T14:36:05.507429+0000 mon.b (mon.2) 102 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:36:06.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:06 vm07 bash[22585]: cephadm 2026-03-09T14:36:05.508533+0000 mgr.y (mgr.24310) 191 : 
cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (unknown) 2026-03-09T14:36:06.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:06 vm07 bash[22585]: cephadm 2026-03-09T14:36:05.508567+0000 mgr.y (mgr.24310) 192 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-09T14:36:06.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:06 vm07 bash[22585]: audit 2026-03-09T14:36:05.514643+0000 mon.a (mon.0) 770 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:36:06.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:06 vm07 bash[22585]: cephadm 2026-03-09T14:36:05.515599+0000 mgr.y (mgr.24310) 193 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.y) 2026-03-09T14:36:06.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:06 vm07 bash[22585]: cephadm 2026-03-09T14:36:05.655233+0000 mgr.y (mgr.24310) 194 : cephadm [INF] Upgrade: Pulling quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df on vm11 2026-03-09T14:36:06.910 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:06 vm07 bash[17480]: cluster 2026-03-09T14:36:05.054557+0000 mgr.y (mgr.24310) 190 : cluster [DBG] pgmap v146: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:36:06.910 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:06 vm07 bash[17480]: audit 2026-03-09T14:36:05.506236+0000 mon.a (mon.0) 769 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:36:06.910 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:06 vm07 bash[17480]: audit 2026-03-09T14:36:05.507429+0000 mon.b (mon.2) 102 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:36:06.910 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:06 vm07 bash[17480]: cephadm 2026-03-09T14:36:05.508533+0000 mgr.y (mgr.24310) 191 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (unknown) 2026-03-09T14:36:06.910 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:06 vm07 bash[17480]: cephadm 2026-03-09T14:36:05.508567+0000 mgr.y (mgr.24310) 192 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-09T14:36:06.910 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:06 vm07 bash[17480]: audit 2026-03-09T14:36:05.514643+0000 mon.a (mon.0) 770 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:36:06.910 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:06 vm07 bash[17480]: cephadm 2026-03-09T14:36:05.515599+0000 mgr.y (mgr.24310) 193 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.y) 2026-03-09T14:36:06.910 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:06 vm07 bash[17480]: cephadm 2026-03-09T14:36:05.655233+0000 mgr.y (mgr.24310) 194 : cephadm [INF] Upgrade: Pulling quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df on vm11 2026-03-09T14:36:08.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:08 vm07 bash[22585]: cluster 2026-03-09T14:36:07.054967+0000 mgr.y (mgr.24310) 195 : cluster [DBG] pgmap v147: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T14:36:08.909 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:08 vm07 bash[22585]: audit 2026-03-09T14:36:07.431651+0000 mgr.y (mgr.24310) 196 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:36:08.909 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:08 vm07 bash[17480]: cluster 2026-03-09T14:36:07.054967+0000 mgr.y (mgr.24310) 195 : cluster [DBG] pgmap v147: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T14:36:08.909 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:08 vm07 bash[17480]: audit 2026-03-09T14:36:07.431651+0000 mgr.y (mgr.24310) 196 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:36:09.005 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:08 vm11 bash[17885]: cluster 2026-03-09T14:36:07.054967+0000 mgr.y (mgr.24310) 195 : cluster [DBG] pgmap v147: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T14:36:09.005 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:08 vm11 bash[17885]: audit 2026-03-09T14:36:07.431651+0000 mgr.y (mgr.24310) 196 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:36:10.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:10 vm07 bash[22585]: cluster 2026-03-09T14:36:09.055496+0000 mgr.y (mgr.24310) 197 : cluster [DBG] pgmap v148: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:36:10.909 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:10 vm07 bash[17480]: cluster 2026-03-09T14:36:09.055496+0000 mgr.y (mgr.24310) 197 : cluster [DBG] pgmap v148: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:36:11.005 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:10 vm11 bash[17885]: cluster 2026-03-09T14:36:09.055496+0000 mgr.y (mgr.24310) 197 : cluster [DBG] pgmap v148: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:36:12.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:11 vm11 bash[17885]: cluster 2026-03-09T14:36:11.055836+0000 mgr.y (mgr.24310) 198 : cluster [DBG] pgmap v149: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T14:36:12.409 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:11 vm07 bash[22585]: cluster 2026-03-09T14:36:11.055836+0000 mgr.y (mgr.24310) 198 : cluster [DBG] pgmap v149: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T14:36:12.409 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:11 vm07 bash[17480]: cluster 2026-03-09T14:36:11.055836+0000 mgr.y (mgr.24310) 198 : cluster [DBG] pgmap v149: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T14:36:12.909 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:36:12 vm07 bash[17785]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:36:12] "GET /metrics HTTP/1.1" 200 214373 "" "Prometheus/2.33.4" 2026-03-09T14:36:13.755 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:36:13 vm11 bash[18539]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:36:13] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 
2026-03-09T14:36:13.909 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:36:13 vm07 bash[42609]: level=error ts=2026-03-09T14:36:13.518Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:36:13.909 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:36:13 vm07 bash[42609]: level=warn ts=2026-03-09T14:36:13.520Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:36:13.909 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:36:13 vm07 bash[42609]: level=warn ts=2026-03-09T14:36:13.521Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:36:14.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:14 vm11 bash[17885]: cluster 2026-03-09T14:36:13.056174+0000 mgr.y (mgr.24310) 199 : cluster [DBG] pgmap v150: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T14:36:14.659 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:14 vm07 bash[22585]: cluster 2026-03-09T14:36:13.056174+0000 mgr.y (mgr.24310) 199 : cluster [DBG] pgmap v150: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T14:36:14.659 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:14 vm07 bash[17480]: cluster 2026-03-09T14:36:13.056174+0000 mgr.y (mgr.24310) 199 : cluster [DBG] pgmap v150: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-09T14:36:16.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:16 vm11 bash[17885]: cluster 2026-03-09T14:36:15.056707+0000 mgr.y (mgr.24310) 200 : cluster [DBG] pgmap v151: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:36:16.659 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:16 vm07 bash[22585]: cluster 2026-03-09T14:36:15.056707+0000 mgr.y (mgr.24310) 200 : cluster [DBG] pgmap v151: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:36:16.659 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:16 vm07 bash[17480]: cluster 2026-03-09T14:36:15.056707+0000 mgr.y (mgr.24310) 200 : cluster [DBG] pgmap v151: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:36:17.307 INFO:teuthology.orchestra.run.vm07.stdout:true 2026-03-09T14:36:17.669 INFO:teuthology.orchestra.run.vm07.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 
2026-03-09T14:36:17.669 INFO:teuthology.orchestra.run.vm07.stdout:alertmanager.a vm07 *:9093,9094 running (3m) 2m ago 3m 15.5M - ba2b418f427c a61514665550
2026-03-09T14:36:17.669 INFO:teuthology.orchestra.run.vm07.stdout:grafana.a vm11 *:3000 running (3m) 2m ago 3m 40.0M - 8.3.5 dad864ee21e9 540326cca8f5
2026-03-09T14:36:17.669 INFO:teuthology.orchestra.run.vm07.stdout:iscsi.foo.vm07.ohlmos vm07 running (3m) 2m ago 3m 63.6M - 3.5 e1d6a67b021e 6e71f6329b43
2026-03-09T14:36:17.669 INFO:teuthology.orchestra.run.vm07.stdout:mgr.x vm11 *:8443 running (6m) 2m ago 6m 398M - 17.2.0 e1d6a67b021e 1c2e5c27f796
2026-03-09T14:36:17.669 INFO:teuthology.orchestra.run.vm07.stdout:mgr.y vm07 *:9283 running (6m) 2m ago 6m 442M - 17.2.0 e1d6a67b021e df6605dd81b3
2026-03-09T14:36:17.669 INFO:teuthology.orchestra.run.vm07.stdout:mon.a vm07 running (6m) 2m ago 6m 49.0M 2048M 17.2.0 e1d6a67b021e 47602ca6fae7
2026-03-09T14:36:17.669 INFO:teuthology.orchestra.run.vm07.stdout:mon.b vm11 running (6m) 2m ago 6m 45.4M 2048M 17.2.0 e1d6a67b021e eac3b7829b01
2026-03-09T14:36:17.669 INFO:teuthology.orchestra.run.vm07.stdout:mon.c vm07 running (6m) 2m ago 6m 45.0M 2048M 17.2.0 e1d6a67b021e 9c901130627b
2026-03-09T14:36:17.670 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.a vm07 *:9100 running (3m) 2m ago 3m 8815k - 1dbe0e931976 10000a0b8245
2026-03-09T14:36:17.670 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.b vm11 *:9100 running (3m) 2m ago 3m 7516k - 1dbe0e931976 38d6b8c74501
2026-03-09T14:36:17.670 INFO:teuthology.orchestra.run.vm07.stdout:osd.0 vm07 running (5m) 2m ago 6m 46.2M 4096M 17.2.0 e1d6a67b021e 7a4a11fbf70d
2026-03-09T14:36:17.670 INFO:teuthology.orchestra.run.vm07.stdout:osd.1 vm07 running (5m) 2m ago 5m 48.3M 4096M 17.2.0 e1d6a67b021e 15e2e23b506b
2026-03-09T14:36:17.670 INFO:teuthology.orchestra.run.vm07.stdout:osd.2 vm07 running (5m) 2m ago 5m 43.3M 4096M 17.2.0 e1d6a67b021e fe41cd2240dc
2026-03-09T14:36:17.670 INFO:teuthology.orchestra.run.vm07.stdout:osd.3 vm07 running (5m) 2m ago 5m 44.3M 4096M 17.2.0 e1d6a67b021e b07b01a0b5aa
2026-03-09T14:36:17.670 INFO:teuthology.orchestra.run.vm07.stdout:osd.4 vm11 running (4m) 2m ago 4m 46.2M 4096M 17.2.0 e1d6a67b021e 172516d931e5
2026-03-09T14:36:17.670 INFO:teuthology.orchestra.run.vm07.stdout:osd.5 vm11 running (4m) 2m ago 4m 43.8M 4096M 17.2.0 e1d6a67b021e d7defb26b5d1
2026-03-09T14:36:17.670 INFO:teuthology.orchestra.run.vm07.stdout:osd.6 vm11 running (4m) 2m ago 4m 44.1M 4096M 17.2.0 e1d6a67b021e 52e28e90b585
2026-03-09T14:36:17.670 INFO:teuthology.orchestra.run.vm07.stdout:osd.7 vm11 running (4m) 2m ago 4m 44.2M 4096M 17.2.0 e1d6a67b021e abb74346bf4d
2026-03-09T14:36:17.670 INFO:teuthology.orchestra.run.vm07.stdout:prometheus.a vm11 *:9095 running (3m) 2m ago 3m 38.0M - 514e6a882f6e 58ae57f001a5
2026-03-09T14:36:17.670 INFO:teuthology.orchestra.run.vm07.stdout:rgw.foo.vm07.urmgxb vm07 *:8000 running (3m) 2m ago 3m 81.5M - 17.2.0 e1d6a67b021e 765128ae03a3
2026-03-09T14:36:17.670 INFO:teuthology.orchestra.run.vm07.stdout:rgw.foo.vm11.ncyump vm11 *:8000 running (3m) 2m ago 3m 81.2M - 17.2.0 e1d6a67b021e 33917711cfd6
2026-03-09T14:36:17.670 INFO:teuthology.orchestra.run.vm07.stdout:rgw.smpl.vm07.tkkeli vm07 *:80 running (3m) 2m ago 3m 81.5M - 17.2.0 e1d6a67b021e 377fed84fff0
2026-03-09T14:36:17.670 INFO:teuthology.orchestra.run.vm07.stdout:rgw.smpl.vm11.ocxkef vm11 *:80 running (3m) 2m ago 3m 81.3M - 17.2.0 e1d6a67b021e 90ec06d07cd4
2026-03-09T14:36:17.872 INFO:teuthology.orchestra.run.vm07.stdout:{
2026-03-09T14:36:17.872 INFO:teuthology.orchestra.run.vm07.stdout: "mon": {
2026-03-09T14:36:17.872 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 3
2026-03-09T14:36:17.872 INFO:teuthology.orchestra.run.vm07.stdout: },
2026-03-09T14:36:17.872 INFO:teuthology.orchestra.run.vm07.stdout: "mgr": {
2026-03-09T14:36:17.872 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2
2026-03-09T14:36:17.873 INFO:teuthology.orchestra.run.vm07.stdout: },
2026-03-09T14:36:17.873 INFO:teuthology.orchestra.run.vm07.stdout: "osd": {
2026-03-09T14:36:17.873 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8
2026-03-09T14:36:17.873 INFO:teuthology.orchestra.run.vm07.stdout: },
2026-03-09T14:36:17.873 INFO:teuthology.orchestra.run.vm07.stdout: "mds": {},
2026-03-09T14:36:17.873 INFO:teuthology.orchestra.run.vm07.stdout: "rgw": {
2026-03-09T14:36:17.873 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 4
2026-03-09T14:36:17.873 INFO:teuthology.orchestra.run.vm07.stdout: },
2026-03-09T14:36:17.873 INFO:teuthology.orchestra.run.vm07.stdout: "overall": {
2026-03-09T14:36:17.873 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 17
2026-03-09T14:36:17.873 INFO:teuthology.orchestra.run.vm07.stdout: }
2026-03-09T14:36:17.873 INFO:teuthology.orchestra.run.vm07.stdout:}
2026-03-09T14:36:18.047 INFO:teuthology.orchestra.run.vm07.stdout:{
2026-03-09T14:36:18.047 INFO:teuthology.orchestra.run.vm07.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df",
2026-03-09T14:36:18.047 INFO:teuthology.orchestra.run.vm07.stdout: "in_progress": true,
2026-03-09T14:36:18.047 INFO:teuthology.orchestra.run.vm07.stdout: "services_complete": [],
2026-03-09T14:36:18.047 INFO:teuthology.orchestra.run.vm07.stdout: "progress": "0/23 daemons upgraded",
2026-03-09T14:36:18.047 INFO:teuthology.orchestra.run.vm07.stdout: "message": "Pulling quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df image on host vm11"
2026-03-09T14:36:18.048 INFO:teuthology.orchestra.run.vm07.stdout:}
2026-03-09T14:36:18.273 INFO:teuthology.orchestra.run.vm07.stdout:HEALTH_OK
2026-03-09T14:36:18.659 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:18 vm07 bash[22585]: cluster 2026-03-09T14:36:17.057062+0000 mgr.y (mgr.24310) 201 : cluster [DBG] pgmap v152: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T14:36:18.659 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:18 vm07 bash[22585]: audit 2026-03-09T14:36:17.300446+0000 mgr.y (mgr.24310) 202 : audit [DBG] from='client.24835 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:36:18.659 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:18 vm07 bash[22585]: audit 2026-03-09T14:36:17.442049+0000 mgr.y (mgr.24310) 203 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:36:18.659 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:18 vm07 bash[22585]: audit 2026-03-09T14:36:17.484816+0000 mgr.y (mgr.24310) 204 : audit [DBG] from='client.24838 -' entity='client.admin' cmd=[{"prefix": "orch upgrade
status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:36:18.659 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:18 vm07 bash[22585]: audit 2026-03-09T14:36:17.668256+0000 mgr.y (mgr.24310) 205 : audit [DBG] from='client.14940 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:36:18.659 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:18 vm07 bash[22585]: audit 2026-03-09T14:36:17.876347+0000 mon.a (mon.0) 771 : audit [DBG] from='client.? 192.168.123.107:0/1873566642' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:36:18.659 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:18 vm07 bash[17480]: cluster 2026-03-09T14:36:17.057062+0000 mgr.y (mgr.24310) 201 : cluster [DBG] pgmap v152: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:18.659 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:18 vm07 bash[17480]: audit 2026-03-09T14:36:17.300446+0000 mgr.y (mgr.24310) 202 : audit [DBG] from='client.24835 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:36:18.659 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:18 vm07 bash[17480]: audit 2026-03-09T14:36:17.442049+0000 mgr.y (mgr.24310) 203 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:36:18.659 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:18 vm07 bash[17480]: audit 2026-03-09T14:36:17.484816+0000 mgr.y (mgr.24310) 204 : audit [DBG] from='client.24838 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:36:18.659 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:18 vm07 bash[17480]: audit 2026-03-09T14:36:17.668256+0000 mgr.y (mgr.24310) 205 : audit [DBG] from='client.14940 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:36:18.659 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:18 vm07 bash[17480]: audit 2026-03-09T14:36:17.876347+0000 mon.a (mon.0) 771 : audit [DBG] from='client.? 
192.168.123.107:0/1873566642' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:36:18.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:18 vm11 bash[17885]: cluster 2026-03-09T14:36:17.057062+0000 mgr.y (mgr.24310) 201 : cluster [DBG] pgmap v152: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:18.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:18 vm11 bash[17885]: audit 2026-03-09T14:36:17.300446+0000 mgr.y (mgr.24310) 202 : audit [DBG] from='client.24835 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:36:18.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:18 vm11 bash[17885]: audit 2026-03-09T14:36:17.442049+0000 mgr.y (mgr.24310) 203 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:36:18.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:18 vm11 bash[17885]: audit 2026-03-09T14:36:17.484816+0000 mgr.y (mgr.24310) 204 : audit [DBG] from='client.24838 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:36:18.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:18 vm11 bash[17885]: audit 2026-03-09T14:36:17.668256+0000 mgr.y (mgr.24310) 205 : audit [DBG] from='client.14940 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:36:18.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:18 vm11 bash[17885]: audit 2026-03-09T14:36:17.876347+0000 mon.a (mon.0) 771 : audit [DBG] from='client.? 192.168.123.107:0/1873566642' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:36:19.271 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:19 vm07 bash[17480]: audit 2026-03-09T14:36:18.050928+0000 mgr.y (mgr.24310) 206 : audit [DBG] from='client.24847 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:36:19.659 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:19 vm07 bash[22585]: audit 2026-03-09T14:36:18.050928+0000 mgr.y (mgr.24310) 206 : audit [DBG] from='client.24847 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:36:19.659 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:19 vm07 bash[22585]: audit 2026-03-09T14:36:18.276447+0000 mon.c (mon.1) 36 : audit [DBG] from='client.? 192.168.123.107:0/548699775' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:36:19.659 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:19 vm07 bash[17480]: audit 2026-03-09T14:36:18.276447+0000 mon.c (mon.1) 36 : audit [DBG] from='client.? 192.168.123.107:0/548699775' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:36:19.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:19 vm11 bash[17885]: audit 2026-03-09T14:36:18.050928+0000 mgr.y (mgr.24310) 206 : audit [DBG] from='client.24847 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:36:19.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:19 vm11 bash[17885]: audit 2026-03-09T14:36:18.276447+0000 mon.c (mon.1) 36 : audit [DBG] from='client.? 
192.168.123.107:0/548699775' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:36:20.659 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:20 vm07 bash[22585]: cluster 2026-03-09T14:36:19.057542+0000 mgr.y (mgr.24310) 207 : cluster [DBG] pgmap v153: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:36:20.659 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:20 vm07 bash[17480]: cluster 2026-03-09T14:36:19.057542+0000 mgr.y (mgr.24310) 207 : cluster [DBG] pgmap v153: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:36:20.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:20 vm11 bash[17885]: cluster 2026-03-09T14:36:19.057542+0000 mgr.y (mgr.24310) 207 : cluster [DBG] pgmap v153: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:36:22.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:21 vm11 bash[17885]: cluster 2026-03-09T14:36:21.057917+0000 mgr.y (mgr.24310) 208 : cluster [DBG] pgmap v154: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:22.409 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:21 vm07 bash[22585]: cluster 2026-03-09T14:36:21.057917+0000 mgr.y (mgr.24310) 208 : cluster [DBG] pgmap v154: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:22.409 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:21 vm07 bash[17480]: cluster 2026-03-09T14:36:21.057917+0000 mgr.y (mgr.24310) 208 : cluster [DBG] pgmap v154: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:22.909 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:36:22 vm07 bash[17785]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:36:22] "GET /metrics HTTP/1.1" 200 214373 "" "Prometheus/2.33.4" 2026-03-09T14:36:23.409 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:23 vm07 bash[22585]: audit 2026-03-09T14:36:23.103612+0000 mon.b (mon.2) 103 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:36:23.409 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:23 vm07 bash[22585]: audit 2026-03-09T14:36:23.105990+0000 mon.a (mon.0) 772 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:36:23.409 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:23 vm07 bash[22585]: audit 2026-03-09T14:36:23.121515+0000 mon.b (mon.2) 104 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:36:23.409 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:23 vm07 bash[22585]: audit 2026-03-09T14:36:23.123874+0000 mon.a (mon.0) 773 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:36:23.409 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:23 vm07 bash[17480]: audit 2026-03-09T14:36:23.103612+0000 mon.b (mon.2) 103 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:36:23.409 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:23 vm07 bash[17480]: audit 2026-03-09T14:36:23.105990+0000 mon.a (mon.0) 772 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:36:23.409 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:23 vm07 bash[17480]: audit 2026-03-09T14:36:23.121515+0000 mon.b (mon.2) 104 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:36:23.409 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:23 vm07 bash[17480]: audit 2026-03-09T14:36:23.123874+0000 mon.a (mon.0) 773 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:36:23.432 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:23 vm11 bash[17885]: audit 2026-03-09T14:36:23.103612+0000 mon.b (mon.2) 103 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:36:23.432 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:23 vm11 bash[17885]: audit 2026-03-09T14:36:23.105990+0000 mon.a (mon.0) 772 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:36:23.432 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:23 vm11 bash[17885]: audit 2026-03-09T14:36:23.121515+0000 mon.b (mon.2) 104 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:36:23.432 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:23 vm11 bash[17885]: audit 2026-03-09T14:36:23.123874+0000 mon.a (mon.0) 773 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:36:23.755 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:36:23 vm11 bash[18539]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:36:23] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T14:36:23.909 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:36:23 vm07 bash[42609]: level=error ts=2026-03-09T14:36:23.521Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:36:23.909 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:36:23 vm07 bash[42609]: level=warn ts=2026-03-09T14:36:23.523Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 
192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:36:23.909 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:36:23 vm07 bash[42609]: level=warn ts=2026-03-09T14:36:23.523Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:36:24.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:24 vm11 bash[17885]: cluster 2026-03-09T14:36:23.058334+0000 mgr.y (mgr.24310) 209 : cluster [DBG] pgmap v155: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:24.659 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:24 vm07 bash[22585]: cluster 2026-03-09T14:36:23.058334+0000 mgr.y (mgr.24310) 209 : cluster [DBG] pgmap v155: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:24.659 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:24 vm07 bash[17480]: cluster 2026-03-09T14:36:23.058334+0000 mgr.y (mgr.24310) 209 : cluster [DBG] pgmap v155: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:26.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:26 vm11 bash[17885]: cluster 2026-03-09T14:36:25.058899+0000 mgr.y (mgr.24310) 210 : cluster [DBG] pgmap v156: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:36:26.659 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:26 vm07 bash[22585]: cluster 2026-03-09T14:36:25.058899+0000 mgr.y (mgr.24310) 210 : cluster [DBG] pgmap v156: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:36:26.659 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:26 vm07 bash[17480]: cluster 2026-03-09T14:36:25.058899+0000 mgr.y (mgr.24310) 210 : cluster [DBG] pgmap v156: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:36:28.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:28 vm11 bash[17885]: cluster 2026-03-09T14:36:27.059253+0000 mgr.y (mgr.24310) 211 : cluster [DBG] pgmap v157: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:28.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:28 vm11 bash[17885]: audit 2026-03-09T14:36:27.450092+0000 mgr.y (mgr.24310) 212 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:36:28.659 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:28 vm07 bash[22585]: cluster 2026-03-09T14:36:27.059253+0000 mgr.y (mgr.24310) 211 : cluster [DBG] pgmap v157: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:28.659 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:28 vm07 bash[22585]: audit 2026-03-09T14:36:27.450092+0000 mgr.y (mgr.24310) 212 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:36:28.659 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:28 vm07 bash[17480]: cluster 2026-03-09T14:36:27.059253+0000 mgr.y (mgr.24310) 211 : cluster 
[DBG] pgmap v157: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:28.659 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:28 vm07 bash[17480]: audit 2026-03-09T14:36:27.450092+0000 mgr.y (mgr.24310) 212 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:36:30.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:30 vm11 bash[17885]: cluster 2026-03-09T14:36:29.059782+0000 mgr.y (mgr.24310) 213 : cluster [DBG] pgmap v158: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:36:30.659 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:30 vm07 bash[22585]: cluster 2026-03-09T14:36:29.059782+0000 mgr.y (mgr.24310) 213 : cluster [DBG] pgmap v158: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:36:30.659 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:30 vm07 bash[17480]: cluster 2026-03-09T14:36:29.059782+0000 mgr.y (mgr.24310) 213 : cluster [DBG] pgmap v158: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:36:32.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:31 vm11 bash[17885]: cluster 2026-03-09T14:36:31.060119+0000 mgr.y (mgr.24310) 214 : cluster [DBG] pgmap v159: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:32.409 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:31 vm07 bash[22585]: cluster 2026-03-09T14:36:31.060119+0000 mgr.y (mgr.24310) 214 : cluster [DBG] pgmap v159: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:32.409 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:31 vm07 bash[17480]: cluster 2026-03-09T14:36:31.060119+0000 mgr.y (mgr.24310) 214 : cluster [DBG] pgmap v159: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:32.909 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:36:32 vm07 bash[17785]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:36:32] "GET /metrics HTTP/1.1" 200 214362 "" "Prometheus/2.33.4" 2026-03-09T14:36:33.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:36:33 vm11 bash[18539]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:36:33] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T14:36:33.909 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:36:33 vm07 bash[42609]: level=error ts=2026-03-09T14:36:33.522Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:36:33.909 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:36:33 vm07 bash[42609]: level=warn ts=2026-03-09T14:36:33.523Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post 
\"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:36:33.909 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:36:33 vm07 bash[42609]: level=warn ts=2026-03-09T14:36:33.524Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:36:34.409 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:34 vm07 bash[22585]: cluster 2026-03-09T14:36:33.060415+0000 mgr.y (mgr.24310) 215 : cluster [DBG] pgmap v160: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:34.409 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:34 vm07 bash[17480]: cluster 2026-03-09T14:36:33.060415+0000 mgr.y (mgr.24310) 215 : cluster [DBG] pgmap v160: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:34.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:34 vm11 bash[17885]: cluster 2026-03-09T14:36:33.060415+0000 mgr.y (mgr.24310) 215 : cluster [DBG] pgmap v160: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:36.409 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:36 vm07 bash[22585]: cluster 2026-03-09T14:36:35.060928+0000 mgr.y (mgr.24310) 216 : cluster [DBG] pgmap v161: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:36:36.409 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:36 vm07 bash[17480]: cluster 2026-03-09T14:36:35.060928+0000 mgr.y (mgr.24310) 216 : cluster [DBG] pgmap v161: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:36:36.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:36 vm11 bash[17885]: cluster 2026-03-09T14:36:35.060928+0000 mgr.y (mgr.24310) 216 : cluster [DBG] pgmap v161: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:36:38.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:38 vm11 bash[17885]: cluster 2026-03-09T14:36:37.061369+0000 mgr.y (mgr.24310) 217 : cluster [DBG] pgmap v162: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:38.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:38 vm11 bash[17885]: audit 2026-03-09T14:36:37.456949+0000 mgr.y (mgr.24310) 218 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:36:38.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:38 vm07 bash[22585]: cluster 2026-03-09T14:36:37.061369+0000 mgr.y (mgr.24310) 217 : cluster [DBG] pgmap v162: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:38.659 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:38 vm07 bash[22585]: audit 2026-03-09T14:36:37.456949+0000 mgr.y (mgr.24310) 218 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:36:38.659 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 
14:36:38 vm07 bash[17480]: cluster 2026-03-09T14:36:37.061369+0000 mgr.y (mgr.24310) 217 : cluster [DBG] pgmap v162: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:38.659 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:38 vm07 bash[17480]: audit 2026-03-09T14:36:37.456949+0000 mgr.y (mgr.24310) 218 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:36:40.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:40 vm11 bash[17885]: cluster 2026-03-09T14:36:39.061899+0000 mgr.y (mgr.24310) 219 : cluster [DBG] pgmap v163: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:36:40.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:40 vm07 bash[22585]: cluster 2026-03-09T14:36:39.061899+0000 mgr.y (mgr.24310) 219 : cluster [DBG] pgmap v163: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:36:40.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:40 vm07 bash[17480]: cluster 2026-03-09T14:36:39.061899+0000 mgr.y (mgr.24310) 219 : cluster [DBG] pgmap v163: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:36:42.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:41 vm11 bash[17885]: cluster 2026-03-09T14:36:41.062209+0000 mgr.y (mgr.24310) 220 : cluster [DBG] pgmap v164: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:42.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:41 vm07 bash[22585]: cluster 2026-03-09T14:36:41.062209+0000 mgr.y (mgr.24310) 220 : cluster [DBG] pgmap v164: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:42.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:41 vm07 bash[17480]: cluster 2026-03-09T14:36:41.062209+0000 mgr.y (mgr.24310) 220 : cluster [DBG] pgmap v164: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:42.908 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:36:42 vm07 bash[17785]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:36:42] "GET /metrics HTTP/1.1" 200 214377 "" "Prometheus/2.33.4" 2026-03-09T14:36:43.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:36:43 vm11 bash[18539]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:36:43] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T14:36:43.768 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:36:43 vm07 bash[42609]: level=error ts=2026-03-09T14:36:43.522Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:36:43.768 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:36:43 vm07 bash[42609]: level=warn ts=2026-03-09T14:36:43.524Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] 
msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:36:43.768 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:36:43 vm07 bash[42609]: level=warn ts=2026-03-09T14:36:43.524Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:36:44.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:44 vm07 bash[22585]: cluster 2026-03-09T14:36:43.062503+0000 mgr.y (mgr.24310) 221 : cluster [DBG] pgmap v165: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:44.409 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:44 vm07 bash[17480]: cluster 2026-03-09T14:36:43.062503+0000 mgr.y (mgr.24310) 221 : cluster [DBG] pgmap v165: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:44.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:44 vm11 bash[17885]: cluster 2026-03-09T14:36:43.062503+0000 mgr.y (mgr.24310) 221 : cluster [DBG] pgmap v165: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:46.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:46 vm07 bash[22585]: cluster 2026-03-09T14:36:45.063037+0000 mgr.y (mgr.24310) 222 : cluster [DBG] pgmap v166: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:36:46.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:46 vm07 bash[17480]: cluster 2026-03-09T14:36:45.063037+0000 mgr.y (mgr.24310) 222 : cluster [DBG] pgmap v166: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:36:46.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:46 vm11 bash[17885]: cluster 2026-03-09T14:36:45.063037+0000 mgr.y (mgr.24310) 222 : cluster [DBG] pgmap v166: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:36:48.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:48 vm07 bash[22585]: cluster 2026-03-09T14:36:47.063364+0000 mgr.y (mgr.24310) 223 : cluster [DBG] pgmap v167: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:48.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:48 vm07 bash[22585]: audit 2026-03-09T14:36:47.466688+0000 mgr.y (mgr.24310) 224 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:36:48.409 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:48 vm07 bash[17480]: cluster 2026-03-09T14:36:47.063364+0000 mgr.y (mgr.24310) 223 : cluster [DBG] pgmap v167: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:48.409 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:48 vm07 bash[17480]: audit 2026-03-09T14:36:47.466688+0000 mgr.y (mgr.24310) 224 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 
2026-03-09T14:36:48.470 INFO:teuthology.orchestra.run.vm07.stdout:true 2026-03-09T14:36:48.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:48 vm11 bash[17885]: cluster 2026-03-09T14:36:47.063364+0000 mgr.y (mgr.24310) 223 : cluster [DBG] pgmap v167: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:48.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:48 vm11 bash[17885]: audit 2026-03-09T14:36:47.466688+0000 mgr.y (mgr.24310) 224 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:36:48.820 INFO:teuthology.orchestra.run.vm07.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T14:36:48.820 INFO:teuthology.orchestra.run.vm07.stdout:alertmanager.a vm07 *:9093,9094 running (3m) 3m ago 4m 15.5M - ba2b418f427c a61514665550 2026-03-09T14:36:48.820 INFO:teuthology.orchestra.run.vm07.stdout:grafana.a vm11 *:3000 running (3m) 3m ago 3m 40.0M - 8.3.5 dad864ee21e9 540326cca8f5 2026-03-09T14:36:48.820 INFO:teuthology.orchestra.run.vm07.stdout:iscsi.foo.vm07.ohlmos vm07 running (3m) 3m ago 3m 63.6M - 3.5 e1d6a67b021e 6e71f6329b43 2026-03-09T14:36:48.820 INFO:teuthology.orchestra.run.vm07.stdout:mgr.x vm11 *:8443 running (6m) 3m ago 6m 398M - 17.2.0 e1d6a67b021e 1c2e5c27f796 2026-03-09T14:36:48.821 INFO:teuthology.orchestra.run.vm07.stdout:mgr.y vm07 *:9283 running (7m) 3m ago 7m 442M - 17.2.0 e1d6a67b021e df6605dd81b3 2026-03-09T14:36:48.821 INFO:teuthology.orchestra.run.vm07.stdout:mon.a vm07 running (7m) 3m ago 7m 49.0M 2048M 17.2.0 e1d6a67b021e 47602ca6fae7 2026-03-09T14:36:48.821 INFO:teuthology.orchestra.run.vm07.stdout:mon.b vm11 running (6m) 3m ago 6m 45.4M 2048M 17.2.0 e1d6a67b021e eac3b7829b01 2026-03-09T14:36:48.821 INFO:teuthology.orchestra.run.vm07.stdout:mon.c vm07 running (6m) 3m ago 6m 45.0M 2048M 17.2.0 e1d6a67b021e 9c901130627b 2026-03-09T14:36:48.821 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.a vm07 *:9100 running (4m) 3m ago 4m 8815k - 1dbe0e931976 10000a0b8245 2026-03-09T14:36:48.821 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.b vm11 *:9100 running (4m) 3m ago 4m 7516k - 1dbe0e931976 38d6b8c74501 2026-03-09T14:36:48.821 INFO:teuthology.orchestra.run.vm07.stdout:osd.0 vm07 running (6m) 3m ago 6m 46.2M 4096M 17.2.0 e1d6a67b021e 7a4a11fbf70d 2026-03-09T14:36:48.821 INFO:teuthology.orchestra.run.vm07.stdout:osd.1 vm07 running (6m) 3m ago 6m 48.3M 4096M 17.2.0 e1d6a67b021e 15e2e23b506b 2026-03-09T14:36:48.821 INFO:teuthology.orchestra.run.vm07.stdout:osd.2 vm07 running (5m) 3m ago 6m 43.3M 4096M 17.2.0 e1d6a67b021e fe41cd2240dc 2026-03-09T14:36:48.821 INFO:teuthology.orchestra.run.vm07.stdout:osd.3 vm07 running (5m) 3m ago 5m 44.3M 4096M 17.2.0 e1d6a67b021e b07b01a0b5aa 2026-03-09T14:36:48.821 INFO:teuthology.orchestra.run.vm07.stdout:osd.4 vm11 running (5m) 3m ago 5m 46.2M 4096M 17.2.0 e1d6a67b021e 172516d931e5 2026-03-09T14:36:48.821 INFO:teuthology.orchestra.run.vm07.stdout:osd.5 vm11 running (5m) 3m ago 5m 43.8M 4096M 17.2.0 e1d6a67b021e d7defb26b5d1 2026-03-09T14:36:48.821 INFO:teuthology.orchestra.run.vm07.stdout:osd.6 vm11 running (5m) 3m ago 5m 44.1M 4096M 17.2.0 e1d6a67b021e 52e28e90b585 2026-03-09T14:36:48.821 INFO:teuthology.orchestra.run.vm07.stdout:osd.7 vm11 running (4m) 3m ago 4m 44.2M 4096M 17.2.0 e1d6a67b021e abb74346bf4d 2026-03-09T14:36:48.821 INFO:teuthology.orchestra.run.vm07.stdout:prometheus.a vm11 *:9095 running (3m) 
3m ago 4m 38.0M - 514e6a882f6e 58ae57f001a5 2026-03-09T14:36:48.821 INFO:teuthology.orchestra.run.vm07.stdout:rgw.foo.vm07.urmgxb vm07 *:8000 running (3m) 3m ago 3m 81.5M - 17.2.0 e1d6a67b021e 765128ae03a3 2026-03-09T14:36:48.821 INFO:teuthology.orchestra.run.vm07.stdout:rgw.foo.vm11.ncyump vm11 *:8000 running (3m) 3m ago 3m 81.2M - 17.2.0 e1d6a67b021e 33917711cfd6 2026-03-09T14:36:48.821 INFO:teuthology.orchestra.run.vm07.stdout:rgw.smpl.vm07.tkkeli vm07 *:80 running (3m) 3m ago 3m 81.5M - 17.2.0 e1d6a67b021e 377fed84fff0 2026-03-09T14:36:48.821 INFO:teuthology.orchestra.run.vm07.stdout:rgw.smpl.vm11.ocxkef vm11 *:80 running (3m) 3m ago 3m 81.3M - 17.2.0 e1d6a67b021e 90ec06d07cd4 2026-03-09T14:36:49.035 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T14:36:49.035 INFO:teuthology.orchestra.run.vm07.stdout: "mon": { 2026-03-09T14:36:49.035 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 3 2026-03-09T14:36:49.035 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:36:49.036 INFO:teuthology.orchestra.run.vm07.stdout: "mgr": { 2026-03-09T14:36:49.036 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2 2026-03-09T14:36:49.036 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:36:49.036 INFO:teuthology.orchestra.run.vm07.stdout: "osd": { 2026-03-09T14:36:49.036 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8 2026-03-09T14:36:49.036 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:36:49.036 INFO:teuthology.orchestra.run.vm07.stdout: "mds": {}, 2026-03-09T14:36:49.036 INFO:teuthology.orchestra.run.vm07.stdout: "rgw": { 2026-03-09T14:36:49.036 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 4 2026-03-09T14:36:49.036 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:36:49.036 INFO:teuthology.orchestra.run.vm07.stdout: "overall": { 2026-03-09T14:36:49.036 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 17 2026-03-09T14:36:49.036 INFO:teuthology.orchestra.run.vm07.stdout: } 2026-03-09T14:36:49.036 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T14:36:49.223 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T14:36:49.223 INFO:teuthology.orchestra.run.vm07.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", 2026-03-09T14:36:49.223 INFO:teuthology.orchestra.run.vm07.stdout: "in_progress": true, 2026-03-09T14:36:49.223 INFO:teuthology.orchestra.run.vm07.stdout: "services_complete": [], 2026-03-09T14:36:49.223 INFO:teuthology.orchestra.run.vm07.stdout: "progress": "0/23 daemons upgraded", 2026-03-09T14:36:49.223 INFO:teuthology.orchestra.run.vm07.stdout: "message": "Pulling quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df image on host vm11" 2026-03-09T14:36:49.223 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T14:36:49.448 INFO:teuthology.orchestra.run.vm07.stdout:HEALTH_OK 2026-03-09T14:36:49.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:49 vm07 bash[22585]: audit 2026-03-09T14:36:48.463495+0000 mgr.y (mgr.24310) 225 : audit [DBG] from='client.14958 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:36:49.908 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:49 vm07 bash[22585]: audit 2026-03-09T14:36:48.643498+0000 mgr.y (mgr.24310) 226 : audit [DBG] from='client.14964 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:36:49.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:49 vm07 bash[22585]: audit 2026-03-09T14:36:49.037687+0000 mon.b (mon.2) 105 : audit [DBG] from='client.? 192.168.123.107:0/2321456469' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:36:49.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:49 vm07 bash[17480]: audit 2026-03-09T14:36:48.463495+0000 mgr.y (mgr.24310) 225 : audit [DBG] from='client.14958 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:36:49.909 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:49 vm07 bash[17480]: audit 2026-03-09T14:36:48.643498+0000 mgr.y (mgr.24310) 226 : audit [DBG] from='client.14964 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:36:49.909 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:49 vm07 bash[17480]: audit 2026-03-09T14:36:49.037687+0000 mon.b (mon.2) 105 : audit [DBG] from='client.? 192.168.123.107:0/2321456469' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:36:50.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:49 vm11 bash[17885]: audit 2026-03-09T14:36:48.463495+0000 mgr.y (mgr.24310) 225 : audit [DBG] from='client.14958 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:36:50.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:49 vm11 bash[17885]: audit 2026-03-09T14:36:48.643498+0000 mgr.y (mgr.24310) 226 : audit [DBG] from='client.14964 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:36:50.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:49 vm11 bash[17885]: audit 2026-03-09T14:36:49.037687+0000 mon.b (mon.2) 105 : audit [DBG] from='client.? 192.168.123.107:0/2321456469' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:36:50.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:50 vm07 bash[22585]: audit 2026-03-09T14:36:48.820494+0000 mgr.y (mgr.24310) 227 : audit [DBG] from='client.14970 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:36:50.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:50 vm07 bash[22585]: cluster 2026-03-09T14:36:49.064038+0000 mgr.y (mgr.24310) 228 : cluster [DBG] pgmap v168: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:36:50.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:50 vm07 bash[22585]: audit 2026-03-09T14:36:49.227054+0000 mgr.y (mgr.24310) 229 : audit [DBG] from='client.24880 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:36:50.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:50 vm07 bash[22585]: audit 2026-03-09T14:36:49.453139+0000 mon.c (mon.1) 37 : audit [DBG] from='client.? 
192.168.123.107:0/3428751142' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:36:50.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:50 vm07 bash[17480]: audit 2026-03-09T14:36:48.820494+0000 mgr.y (mgr.24310) 227 : audit [DBG] from='client.14970 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:36:50.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:50 vm07 bash[17480]: cluster 2026-03-09T14:36:49.064038+0000 mgr.y (mgr.24310) 228 : cluster [DBG] pgmap v168: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:36:50.909 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:50 vm07 bash[17480]: audit 2026-03-09T14:36:49.227054+0000 mgr.y (mgr.24310) 229 : audit [DBG] from='client.24880 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:36:50.909 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:50 vm07 bash[17480]: audit 2026-03-09T14:36:49.453139+0000 mon.c (mon.1) 37 : audit [DBG] from='client.? 192.168.123.107:0/3428751142' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:36:51.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:50 vm11 bash[17885]: audit 2026-03-09T14:36:48.820494+0000 mgr.y (mgr.24310) 227 : audit [DBG] from='client.14970 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:36:51.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:50 vm11 bash[17885]: cluster 2026-03-09T14:36:49.064038+0000 mgr.y (mgr.24310) 228 : cluster [DBG] pgmap v168: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:36:51.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:50 vm11 bash[17885]: audit 2026-03-09T14:36:49.227054+0000 mgr.y (mgr.24310) 229 : audit [DBG] from='client.24880 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:36:51.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:50 vm11 bash[17885]: audit 2026-03-09T14:36:49.453139+0000 mon.c (mon.1) 37 : audit [DBG] from='client.? 
192.168.123.107:0/3428751142' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:36:52.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:51 vm11 bash[17885]: cluster 2026-03-09T14:36:51.064387+0000 mgr.y (mgr.24310) 230 : cluster [DBG] pgmap v169: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:52.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:51 vm07 bash[22585]: cluster 2026-03-09T14:36:51.064387+0000 mgr.y (mgr.24310) 230 : cluster [DBG] pgmap v169: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:52.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:51 vm07 bash[17480]: cluster 2026-03-09T14:36:51.064387+0000 mgr.y (mgr.24310) 230 : cluster [DBG] pgmap v169: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:52.908 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:36:52 vm07 bash[17785]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:36:52] "GET /metrics HTTP/1.1" 200 214377 "" "Prometheus/2.33.4" 2026-03-09T14:36:53.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:36:53 vm11 bash[18539]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:36:53] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T14:36:53.908 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:36:53 vm07 bash[42609]: level=error ts=2026-03-09T14:36:53.523Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:36:53.909 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:36:53 vm07 bash[42609]: level=warn ts=2026-03-09T14:36:53.524Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:36:53.909 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:36:53 vm07 bash[42609]: level=warn ts=2026-03-09T14:36:53.525Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:36:54.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:54 vm07 bash[22585]: cluster 2026-03-09T14:36:53.064732+0000 mgr.y (mgr.24310) 231 : cluster [DBG] pgmap v170: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:54.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:54 vm07 bash[17480]: cluster 2026-03-09T14:36:53.064732+0000 mgr.y (mgr.24310) 231 : cluster [DBG] pgmap v170: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T14:36:54.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:54 vm11 bash[17885]: cluster 2026-03-09T14:36:53.064732+0000 mgr.y (mgr.24310) 231 : cluster [DBG] pgmap v170: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:56.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:56 vm07 bash[22585]: cluster 2026-03-09T14:36:55.065333+0000 mgr.y (mgr.24310) 232 : cluster [DBG] pgmap v171: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:36:56.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:56 vm07 bash[17480]: cluster 2026-03-09T14:36:55.065333+0000 mgr.y (mgr.24310) 232 : cluster [DBG] pgmap v171: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:36:56.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:56 vm11 bash[17885]: cluster 2026-03-09T14:36:55.065333+0000 mgr.y (mgr.24310) 232 : cluster [DBG] pgmap v171: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:36:58.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:58 vm07 bash[22585]: cluster 2026-03-09T14:36:57.065677+0000 mgr.y (mgr.24310) 233 : cluster [DBG] pgmap v172: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:58.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:36:58 vm07 bash[22585]: audit 2026-03-09T14:36:57.476890+0000 mgr.y (mgr.24310) 234 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:36:58.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:58 vm07 bash[17480]: cluster 2026-03-09T14:36:57.065677+0000 mgr.y (mgr.24310) 233 : cluster [DBG] pgmap v172: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:58.409 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:36:58 vm07 bash[17480]: audit 2026-03-09T14:36:57.476890+0000 mgr.y (mgr.24310) 234 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:36:58.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:58 vm11 bash[17885]: cluster 2026-03-09T14:36:57.065677+0000 mgr.y (mgr.24310) 233 : cluster [DBG] pgmap v172: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:36:58.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:36:58 vm11 bash[17885]: audit 2026-03-09T14:36:57.476890+0000 mgr.y (mgr.24310) 234 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:37:00.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:00 vm07 bash[22585]: cluster 2026-03-09T14:36:59.066404+0000 mgr.y (mgr.24310) 235 : cluster [DBG] pgmap v173: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:37:00.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:00 vm07 bash[17480]: cluster 2026-03-09T14:36:59.066404+0000 mgr.y (mgr.24310) 235 : cluster [DBG] pgmap v173: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:37:00.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:00 vm11 bash[17885]: 
cluster 2026-03-09T14:36:59.066404+0000 mgr.y (mgr.24310) 235 : cluster [DBG] pgmap v173: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:37:02.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:01 vm11 bash[17885]: cluster 2026-03-09T14:37:01.066888+0000 mgr.y (mgr.24310) 236 : cluster [DBG] pgmap v174: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:37:02.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:01 vm07 bash[22585]: cluster 2026-03-09T14:37:01.066888+0000 mgr.y (mgr.24310) 236 : cluster [DBG] pgmap v174: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:37:02.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:01 vm07 bash[17480]: cluster 2026-03-09T14:37:01.066888+0000 mgr.y (mgr.24310) 236 : cluster [DBG] pgmap v174: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:37:02.908 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:02 vm07 bash[17785]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:37:02] "GET /metrics HTTP/1.1" 200 214370 "" "Prometheus/2.33.4" 2026-03-09T14:37:03.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:03 vm11 bash[18539]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:37:03] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T14:37:03.908 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:37:03 vm07 bash[42609]: level=error ts=2026-03-09T14:37:03.524Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:37:03.908 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:37:03 vm07 bash[42609]: level=warn ts=2026-03-09T14:37:03.525Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:37:03.908 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:37:03 vm07 bash[42609]: level=warn ts=2026-03-09T14:37:03.526Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:37:04.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:04 vm07 bash[22585]: cluster 2026-03-09T14:37:03.067217+0000 mgr.y (mgr.24310) 237 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:37:04.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:04 vm07 bash[17480]: cluster 2026-03-09T14:37:03.067217+0000 mgr.y (mgr.24310) 237 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB 
data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:37:05.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:04 vm11 bash[17885]: cluster 2026-03-09T14:37:03.067217+0000 mgr.y (mgr.24310) 237 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:37:07.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:06 vm11 bash[17885]: cluster 2026-03-09T14:37:05.067893+0000 mgr.y (mgr.24310) 238 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:37:07.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:06 vm07 bash[22585]: cluster 2026-03-09T14:37:05.067893+0000 mgr.y (mgr.24310) 238 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:37:07.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:06 vm07 bash[17480]: cluster 2026-03-09T14:37:05.067893+0000 mgr.y (mgr.24310) 238 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:37:08.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:07 vm07 bash[22585]: cluster 2026-03-09T14:37:07.068232+0000 mgr.y (mgr.24310) 239 : cluster [DBG] pgmap v177: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:37:08.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:07 vm07 bash[22585]: audit 2026-03-09T14:37:07.484462+0000 mgr.y (mgr.24310) 240 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:37:08.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:07 vm07 bash[17480]: cluster 2026-03-09T14:37:07.068232+0000 mgr.y (mgr.24310) 239 : cluster [DBG] pgmap v177: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:37:08.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:07 vm07 bash[17480]: audit 2026-03-09T14:37:07.484462+0000 mgr.y (mgr.24310) 240 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:37:08.657 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:07 vm11 bash[17885]: cluster 2026-03-09T14:37:07.068232+0000 mgr.y (mgr.24310) 239 : cluster [DBG] pgmap v177: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:37:08.657 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:07 vm11 bash[17885]: audit 2026-03-09T14:37:07.484462+0000 mgr.y (mgr.24310) 240 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:37:09.620 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:37:09.620 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:09 vm11 systemd[1]: Stopping Ceph mgr.x for f59f9828-1bc3-11f1-bfd8-7b3d0c866040... 2026-03-09T14:37:09.620 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:37:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:09.620 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:37:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:09.621 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:37:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:09.621 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:09.621 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:09.621 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:09.621 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:37:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:37:09.621 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:09.901 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:37:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:09.901 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:09.902 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:37:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:09.902 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:09 vm11 bash[37499]: Error response from daemon: No such container: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-mgr.x 2026-03-09T14:37:09.902 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:09 vm11 bash[37506]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-mgr-x 2026-03-09T14:37:09.902 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:09 vm11 systemd[1]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@mgr.x.service: Main process exited, code=exited, status=143/n/a 2026-03-09T14:37:09.902 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:09 vm11 bash[37538]: Error response from daemon: No such container: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-mgr.x 2026-03-09T14:37:09.902 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:09 vm11 systemd[1]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@mgr.x.service: Failed with result 'exit-code'. 2026-03-09T14:37:09.902 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:09 vm11 systemd[1]: Stopped Ceph mgr.x for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:37:09.902 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:37:09.902 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:37:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:09.902 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:37:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:09.902 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:09.902 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:09.902 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:10.160 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:09 vm11 systemd[1]: Started Ceph mgr.x for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 
2026-03-09T14:37:10.160 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:10 vm11 bash[37598]: debug 2026-03-09T14:37:10.121+0000 7f9f2874a140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T14:37:10.196 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:10 vm07 bash[22585]: cluster 2026-03-09T14:37:09.068711+0000 mgr.y (mgr.24310) 241 : cluster [DBG] pgmap v178: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:37:10.196 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:10 vm07 bash[22585]: cephadm 2026-03-09T14:37:09.192524+0000 mgr.y (mgr.24310) 242 : cephadm [INF] Upgrade: Updating mgr.x 2026-03-09T14:37:10.196 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:10 vm07 bash[22585]: audit 2026-03-09T14:37:09.196226+0000 mon.a (mon.0) 774 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:37:10.196 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:10 vm07 bash[22585]: audit 2026-03-09T14:37:09.196299+0000 mon.b (mon.2) 106 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T14:37:10.196 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:10 vm07 bash[22585]: audit 2026-03-09T14:37:09.197172+0000 mon.b (mon.2) 107 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T14:37:10.196 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:10 vm07 bash[22585]: audit 2026-03-09T14:37:09.197710+0000 mon.b (mon.2) 108 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:37:10.196 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:10 vm07 bash[22585]: audit 2026-03-09T14:37:09.199069+0000 mon.a (mon.0) 775 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T14:37:10.196 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:10 vm07 bash[22585]: cephadm 2026-03-09T14:37:09.200661+0000 mgr.y (mgr.24310) 243 : cephadm [INF] Deploying daemon mgr.x on vm11 2026-03-09T14:37:10.196 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:10 vm07 bash[22585]: audit 2026-03-09T14:37:09.932327+0000 mon.b (mon.2) 109 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:37:10.196 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:10 vm07 bash[22585]: audit 2026-03-09T14:37:09.932800+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:37:10.196 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:10 vm07 bash[22585]: audit 2026-03-09T14:37:09.933006+0000 mon.b (mon.2) 110 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:37:10.196 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:10 vm07 bash[17480]: cluster 2026-03-09T14:37:09.068711+0000 mgr.y (mgr.24310) 241 : cluster [DBG] pgmap v178: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:37:10.196 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:10 vm07 bash[17480]: cephadm 2026-03-09T14:37:09.192524+0000 
mgr.y (mgr.24310) 242 : cephadm [INF] Upgrade: Updating mgr.x 2026-03-09T14:37:10.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:10 vm11 bash[17885]: cluster 2026-03-09T14:37:09.068711+0000 mgr.y (mgr.24310) 241 : cluster [DBG] pgmap v178: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:37:10.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:10 vm11 bash[17885]: cephadm 2026-03-09T14:37:09.192524+0000 mgr.y (mgr.24310) 242 : cephadm [INF] Upgrade: Updating mgr.x 2026-03-09T14:37:10.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:10 vm11 bash[17885]: audit 2026-03-09T14:37:09.196226+0000 mon.a (mon.0) 774 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:37:10.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:10 vm11 bash[17885]: audit 2026-03-09T14:37:09.196299+0000 mon.b (mon.2) 106 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T14:37:10.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:10 vm11 bash[17885]: audit 2026-03-09T14:37:09.197172+0000 mon.b (mon.2) 107 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T14:37:10.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:10 vm11 bash[17885]: audit 2026-03-09T14:37:09.197710+0000 mon.b (mon.2) 108 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:37:10.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:10 vm11 bash[17885]: audit 2026-03-09T14:37:09.199069+0000 mon.a (mon.0) 775 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T14:37:10.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:10 vm11 bash[17885]: cephadm 2026-03-09T14:37:09.200661+0000 mgr.y (mgr.24310) 243 : cephadm [INF] Deploying daemon mgr.x on vm11 2026-03-09T14:37:10.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:10 vm11 bash[17885]: audit 2026-03-09T14:37:09.932327+0000 mon.b (mon.2) 109 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:37:10.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:10 vm11 bash[17885]: audit 2026-03-09T14:37:09.932800+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:37:10.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:10 vm11 bash[17885]: audit 2026-03-09T14:37:09.933006+0000 mon.b (mon.2) 110 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:37:10.505 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:10 vm11 bash[37598]: debug 2026-03-09T14:37:10.157+0000 7f9f2874a140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T14:37:10.505 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:10 vm11 bash[37598]: debug 2026-03-09T14:37:10.281+0000 7f9f2874a140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T14:37:10.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:10 vm07 bash[17480]: audit 2026-03-09T14:37:09.196226+0000 mon.a (mon.0) 774 : audit 
[INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:37:10.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:10 vm07 bash[17480]: audit 2026-03-09T14:37:09.196299+0000 mon.b (mon.2) 106 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T14:37:10.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:10 vm07 bash[17480]: audit 2026-03-09T14:37:09.197172+0000 mon.b (mon.2) 107 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T14:37:10.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:10 vm07 bash[17480]: audit 2026-03-09T14:37:09.197710+0000 mon.b (mon.2) 108 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:37:10.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:10 vm07 bash[17480]: audit 2026-03-09T14:37:09.199069+0000 mon.a (mon.0) 775 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T14:37:10.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:10 vm07 bash[17480]: cephadm 2026-03-09T14:37:09.200661+0000 mgr.y (mgr.24310) 243 : cephadm [INF] Deploying daemon mgr.x on vm11 2026-03-09T14:37:10.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:10 vm07 bash[17480]: audit 2026-03-09T14:37:09.932327+0000 mon.b (mon.2) 109 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:37:10.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:10 vm07 bash[17480]: audit 2026-03-09T14:37:09.932800+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:37:10.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:10 vm07 bash[17480]: audit 2026-03-09T14:37:09.933006+0000 mon.b (mon.2) 110 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:37:11.004 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:10 vm11 bash[37598]: debug 2026-03-09T14:37:10.561+0000 7f9f2874a140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T14:37:11.391 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:11 vm11 bash[37598]: debug 2026-03-09T14:37:11.021+0000 7f9f2874a140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T14:37:11.391 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:11 vm11 bash[37598]: debug 2026-03-09T14:37:11.113+0000 7f9f2874a140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T14:37:11.391 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:11 vm11 bash[37598]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 
2026-03-09T14:37:11.391 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:11 vm11 bash[37598]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-09T14:37:11.391 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:11 vm11 bash[37598]: from numpy import show_config as show_numpy_config 2026-03-09T14:37:11.391 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:11 vm11 bash[37598]: debug 2026-03-09T14:37:11.241+0000 7f9f2874a140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T14:37:11.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:11 vm11 bash[37598]: debug 2026-03-09T14:37:11.389+0000 7f9f2874a140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T14:37:11.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:11 vm11 bash[37598]: debug 2026-03-09T14:37:11.433+0000 7f9f2874a140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T14:37:11.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:11 vm11 bash[37598]: debug 2026-03-09T14:37:11.473+0000 7f9f2874a140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T14:37:11.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:11 vm11 bash[37598]: debug 2026-03-09T14:37:11.513+0000 7f9f2874a140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T14:37:11.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:11 vm11 bash[37598]: debug 2026-03-09T14:37:11.565+0000 7f9f2874a140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T14:37:11.908 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:37:11 vm07 bash[42609]: level=warn ts=2026-03-09T14:37:11.641Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=6 err="Post \"https://192.168.123.111:8443/api/prometheus_receiver\": dial tcp 192.168.123.111:8443: connect: connection refused" 2026-03-09T14:37:12.244 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:11 vm11 bash[17885]: cluster 2026-03-09T14:37:11.069090+0000 mgr.y (mgr.24310) 244 : cluster [DBG] pgmap v179: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:37:12.244 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:12 vm11 bash[37598]: debug 2026-03-09T14:37:12.001+0000 7f9f2874a140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T14:37:12.245 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:12 vm11 bash[37598]: debug 2026-03-09T14:37:12.041+0000 7f9f2874a140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T14:37:12.245 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:12 vm11 bash[37598]: debug 2026-03-09T14:37:12.093+0000 7f9f2874a140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T14:37:12.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:11 vm07 bash[22585]: cluster 2026-03-09T14:37:11.069090+0000 mgr.y (mgr.24310) 244 : cluster [DBG] pgmap v179: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:37:12.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:11 vm07 bash[17480]: cluster 2026-03-09T14:37:11.069090+0000 mgr.y (mgr.24310) 244 : cluster [DBG] pgmap v179: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:37:12.504 
INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:12 vm11 bash[37598]: debug 2026-03-09T14:37:12.241+0000 7f9f2874a140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T14:37:12.504 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:12 vm11 bash[37598]: debug 2026-03-09T14:37:12.285+0000 7f9f2874a140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T14:37:12.504 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:12 vm11 bash[37598]: debug 2026-03-09T14:37:12.325+0000 7f9f2874a140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T14:37:12.504 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:12 vm11 bash[37598]: debug 2026-03-09T14:37:12.433+0000 7f9f2874a140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T14:37:12.853 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:12 vm11 bash[37598]: debug 2026-03-09T14:37:12.593+0000 7f9f2874a140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T14:37:12.853 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:12 vm11 bash[37598]: debug 2026-03-09T14:37:12.813+0000 7f9f2874a140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T14:37:12.908 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:12 vm07 bash[17785]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:37:12] "GET /metrics HTTP/1.1" 200 214376 "" "Prometheus/2.33.4" 2026-03-09T14:37:13.126 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:12 vm11 bash[37598]: debug 2026-03-09T14:37:12.849+0000 7f9f2874a140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T14:37:13.126 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:12 vm11 bash[37598]: debug 2026-03-09T14:37:12.909+0000 7f9f2874a140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T14:37:13.126 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:13 vm11 bash[37598]: debug 2026-03-09T14:37:13.121+0000 7f9f2874a140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T14:37:13.383 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:13 vm11 bash[37598]: debug 2026-03-09T14:37:13.377+0000 7f9f2874a140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T14:37:13.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:13 vm11 bash[37598]: [09/Mar/2026:14:37:13] ENGINE Bus STARTING 2026-03-09T14:37:13.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:13 vm11 bash[37598]: CherryPy Checker: 2026-03-09T14:37:13.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:13 vm11 bash[37598]: The Application mounted at '' has an empty config. 
2026-03-09T14:37:13.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:13 vm11 bash[37598]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:37:13] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T14:37:13.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:13 vm11 bash[37598]: [09/Mar/2026:14:37:13] ENGINE Serving on http://:::9283 2026-03-09T14:37:13.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:13 vm11 bash[37598]: [09/Mar/2026:14:37:13] ENGINE Bus STARTED 2026-03-09T14:37:13.908 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:37:13 vm07 bash[42609]: level=error ts=2026-03-09T14:37:13.525Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.111:8443/api/prometheus_receiver\": dial tcp 192.168.123.111:8443: connect: connection refused; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:37:13.908 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:37:13 vm07 bash[42609]: level=warn ts=2026-03-09T14:37:13.527Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:37:13.908 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:37:13 vm07 bash[42609]: level=warn ts=2026-03-09T14:37:13.529Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:37:14.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:14 vm11 bash[17885]: cluster 2026-03-09T14:37:13.069470+0000 mgr.y (mgr.24310) 245 : cluster [DBG] pgmap v180: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:37:14.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:14 vm11 bash[17885]: audit 2026-03-09T14:37:13.256319+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:37:14.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:14 vm11 bash[17885]: cluster 2026-03-09T14:37:13.390216+0000 mon.a (mon.0) 778 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T14:37:14.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:14 vm11 bash[17885]: cluster 2026-03-09T14:37:13.390425+0000 mon.a (mon.0) 779 : cluster [DBG] Standby manager daemon x started 2026-03-09T14:37:14.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:14 vm11 bash[17885]: audit 2026-03-09T14:37:13.395330+0000 mon.c (mon.1) 38 : audit [DBG] from='mgr.? 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T14:37:14.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:14 vm11 bash[17885]: audit 2026-03-09T14:37:13.395696+0000 mon.c (mon.1) 39 : audit [DBG] from='mgr.? 
192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T14:37:14.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:14 vm11 bash[17885]: audit 2026-03-09T14:37:13.400451+0000 mon.c (mon.1) 40 : audit [DBG] from='mgr.? 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T14:37:14.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:14 vm11 bash[17885]: audit 2026-03-09T14:37:13.400713+0000 mon.c (mon.1) 41 : audit [DBG] from='mgr.? 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T14:37:14.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:14 vm11 bash[17885]: audit 2026-03-09T14:37:13.528855+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:37:14.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:14 vm07 bash[17480]: cluster 2026-03-09T14:37:13.069470+0000 mgr.y (mgr.24310) 245 : cluster [DBG] pgmap v180: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:37:14.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:14 vm07 bash[17480]: audit 2026-03-09T14:37:13.256319+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:37:14.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:14 vm07 bash[17480]: cluster 2026-03-09T14:37:13.390216+0000 mon.a (mon.0) 778 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T14:37:14.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:14 vm07 bash[17480]: cluster 2026-03-09T14:37:13.390425+0000 mon.a (mon.0) 779 : cluster [DBG] Standby manager daemon x started 2026-03-09T14:37:14.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:14 vm07 bash[17480]: audit 2026-03-09T14:37:13.395330+0000 mon.c (mon.1) 38 : audit [DBG] from='mgr.? 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T14:37:14.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:14 vm07 bash[17480]: audit 2026-03-09T14:37:13.395696+0000 mon.c (mon.1) 39 : audit [DBG] from='mgr.? 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T14:37:14.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:14 vm07 bash[17480]: audit 2026-03-09T14:37:13.400451+0000 mon.c (mon.1) 40 : audit [DBG] from='mgr.? 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T14:37:14.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:14 vm07 bash[17480]: audit 2026-03-09T14:37:13.400713+0000 mon.c (mon.1) 41 : audit [DBG] from='mgr.? 
192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T14:37:14.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:14 vm07 bash[17480]: audit 2026-03-09T14:37:13.528855+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:37:14.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:14 vm07 bash[22585]: cluster 2026-03-09T14:37:13.069470+0000 mgr.y (mgr.24310) 245 : cluster [DBG] pgmap v180: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:37:14.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:14 vm07 bash[22585]: audit 2026-03-09T14:37:13.256319+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:37:14.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:14 vm07 bash[22585]: cluster 2026-03-09T14:37:13.390216+0000 mon.a (mon.0) 778 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T14:37:14.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:14 vm07 bash[22585]: cluster 2026-03-09T14:37:13.390425+0000 mon.a (mon.0) 779 : cluster [DBG] Standby manager daemon x started 2026-03-09T14:37:14.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:14 vm07 bash[22585]: audit 2026-03-09T14:37:13.395330+0000 mon.c (mon.1) 38 : audit [DBG] from='mgr.? 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T14:37:14.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:14 vm07 bash[22585]: audit 2026-03-09T14:37:13.395696+0000 mon.c (mon.1) 39 : audit [DBG] from='mgr.? 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T14:37:14.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:14 vm07 bash[22585]: audit 2026-03-09T14:37:13.400451+0000 mon.c (mon.1) 40 : audit [DBG] from='mgr.? 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T14:37:14.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:14 vm07 bash[22585]: audit 2026-03-09T14:37:13.400713+0000 mon.c (mon.1) 41 : audit [DBG] from='mgr.? 
192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T14:37:14.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:14 vm07 bash[22585]: audit 2026-03-09T14:37:13.528855+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:37:15.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:15 vm07 bash[22585]: cluster 2026-03-09T14:37:14.279311+0000 mon.a (mon.0) 781 : cluster [DBG] mgrmap e22: y(active, since 4m), standbys: x 2026-03-09T14:37:15.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:15 vm07 bash[17480]: cluster 2026-03-09T14:37:14.279311+0000 mon.a (mon.0) 781 : cluster [DBG] mgrmap e22: y(active, since 4m), standbys: x 2026-03-09T14:37:15.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:15 vm11 bash[17885]: cluster 2026-03-09T14:37:14.279311+0000 mon.a (mon.0) 781 : cluster [DBG] mgrmap e22: y(active, since 4m), standbys: x 2026-03-09T14:37:16.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:16 vm11 bash[17885]: cluster 2026-03-09T14:37:15.069994+0000 mgr.y (mgr.24310) 246 : cluster [DBG] pgmap v181: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:37:16.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:16 vm07 bash[22585]: cluster 2026-03-09T14:37:15.069994+0000 mgr.y (mgr.24310) 246 : cluster [DBG] pgmap v181: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:37:16.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:16 vm07 bash[17480]: cluster 2026-03-09T14:37:15.069994+0000 mgr.y (mgr.24310) 246 : cluster [DBG] pgmap v181: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:37:18.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:18 vm07 bash[22585]: cluster 2026-03-09T14:37:17.070282+0000 mgr.y (mgr.24310) 247 : cluster [DBG] pgmap v182: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:37:18.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:18 vm07 bash[22585]: audit 2026-03-09T14:37:17.489952+0000 mgr.y (mgr.24310) 248 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:37:18.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:18 vm07 bash[22585]: audit 2026-03-09T14:37:18.016861+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:37:18.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:18 vm07 bash[22585]: audit 2026-03-09T14:37:18.023474+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:37:18.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:18 vm07 bash[22585]: audit 2026-03-09T14:37:18.023827+0000 mon.b (mon.2) 111 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:37:18.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:18 vm07 bash[22585]: audit 2026-03-09T14:37:18.028127+0000 mon.b (mon.2) 112 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "mgr fail", "who": "y"}]: dispatch 2026-03-09T14:37:18.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:18 vm07 bash[22585]: audit 2026-03-09T14:37:18.030982+0000 mon.a (mon.0) 784 : audit [INF] from='mgr.24310 
' entity='mgr.y' cmd=[{"prefix": "mgr fail", "who": "y"}]: dispatch 2026-03-09T14:37:18.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:18 vm07 bash[22585]: cluster 2026-03-09T14:37:18.038648+0000 mon.a (mon.0) 785 : cluster [DBG] osdmap e83: 8 total, 8 up, 8 in 2026-03-09T14:37:18.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:18 vm07 bash[22585]: cluster 2026-03-09T14:37:18.095879+0000 mon.a (mon.0) 786 : cluster [DBG] Standby manager daemon y started 2026-03-09T14:37:18.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:18 vm07 bash[17480]: cluster 2026-03-09T14:37:17.070282+0000 mgr.y (mgr.24310) 247 : cluster [DBG] pgmap v182: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:37:18.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:18 vm07 bash[17480]: audit 2026-03-09T14:37:17.489952+0000 mgr.y (mgr.24310) 248 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:37:18.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:18 vm07 bash[17480]: audit 2026-03-09T14:37:18.016861+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:37:18.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:18 vm07 bash[17480]: audit 2026-03-09T14:37:18.023474+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:37:18.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:18 vm07 bash[17480]: audit 2026-03-09T14:37:18.023827+0000 mon.b (mon.2) 111 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:37:18.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:18 vm07 bash[17480]: audit 2026-03-09T14:37:18.028127+0000 mon.b (mon.2) 112 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "mgr fail", "who": "y"}]: dispatch 2026-03-09T14:37:18.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:18 vm07 bash[17480]: audit 2026-03-09T14:37:18.030982+0000 mon.a (mon.0) 784 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "mgr fail", "who": "y"}]: dispatch 2026-03-09T14:37:18.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:18 vm07 bash[17480]: cluster 2026-03-09T14:37:18.038648+0000 mon.a (mon.0) 785 : cluster [DBG] osdmap e83: 8 total, 8 up, 8 in 2026-03-09T14:37:18.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:18 vm07 bash[17480]: cluster 2026-03-09T14:37:18.095879+0000 mon.a (mon.0) 786 : cluster [DBG] Standby manager daemon y started 2026-03-09T14:37:18.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:18 vm11 bash[17885]: cluster 2026-03-09T14:37:17.070282+0000 mgr.y (mgr.24310) 247 : cluster [DBG] pgmap v182: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:37:18.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:18 vm11 bash[17885]: audit 2026-03-09T14:37:17.489952+0000 mgr.y (mgr.24310) 248 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:37:18.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:18 vm11 bash[17885]: audit 2026-03-09T14:37:18.016861+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:37:18.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:18 vm11 bash[17885]: 
audit 2026-03-09T14:37:18.023474+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-09T14:37:18.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:18 vm11 bash[17885]: audit 2026-03-09T14:37:18.023827+0000 mon.b (mon.2) 111 : audit [DBG] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:37:18.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:18 vm11 bash[17885]: audit 2026-03-09T14:37:18.028127+0000 mon.b (mon.2) 112 : audit [INF] from='mgr.24310 192.168.123.107:0/870653349' entity='mgr.y' cmd=[{"prefix": "mgr fail", "who": "y"}]: dispatch 2026-03-09T14:37:18.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:18 vm11 bash[17885]: audit 2026-03-09T14:37:18.030982+0000 mon.a (mon.0) 784 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "mgr fail", "who": "y"}]: dispatch 2026-03-09T14:37:18.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:18 vm11 bash[17885]: cluster 2026-03-09T14:37:18.038648+0000 mon.a (mon.0) 785 : cluster [DBG] osdmap e83: 8 total, 8 up, 8 in 2026-03-09T14:37:18.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:18 vm11 bash[17885]: cluster 2026-03-09T14:37:18.095879+0000 mon.a (mon.0) 786 : cluster [DBG] Standby manager daemon y started 2026-03-09T14:37:19.288 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:37:19 vm07 bash[42609]: level=warn ts=2026-03-09T14:37:19.276Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=5 err="Post \"https://192.168.123.107:8443//api/prometheus_receiver\": dial tcp 192.168.123.107:8443: connect: connection refused" 2026-03-09T14:37:19.288 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:19 vm07 bash[17785]: debug 2026-03-09T14:37:19.034+0000 7f9d4d5bb700 -1 mgr handle_mgr_map I was active but no longer am 2026-03-09T14:37:19.288 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:19 vm07 bash[17785]: ignoring --setuser ceph since I am not root 2026-03-09T14:37:19.288 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:19 vm07 bash[17785]: ignoring --setgroup ceph since I am not root 2026-03-09T14:37:19.288 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:19 vm07 bash[17785]: debug 2026-03-09T14:37:19.162+0000 7fae36620000 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T14:37:19.288 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:19 vm07 bash[17785]: debug 2026-03-09T14:37:19.210+0000 7fae36620000 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T14:37:19.504 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:19 vm11 bash[37598]: [09/Mar/2026:14:37:19] ENGINE Bus STOPPING 2026-03-09T14:37:19.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:19 vm11 bash[17885]: cephadm 2026-03-09T14:37:18.028640+0000 mgr.y (mgr.24310) 249 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.y) 2026-03-09T14:37:19.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:19 vm11 bash[17885]: cephadm 2026-03-09T14:37:18.030500+0000 mgr.y (mgr.24310) 250 : cephadm [INF] Failing over to other MGR 2026-03-09T14:37:19.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:19 vm11 bash[17885]: audit 2026-03-09T14:37:19.039093+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "mgr fail", "who": "y"}]': finished 2026-03-09T14:37:19.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:19 vm11 bash[17885]: cluster 
2026-03-09T14:37:19.042495+0000 mon.a (mon.0) 788 : cluster [DBG] mgrmap e23: x(active, starting, since 1.0101s), standbys: y 2026-03-09T14:37:19.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:19 vm11 bash[17885]: audit 2026-03-09T14:37:19.049256+0000 mon.c (mon.1) 42 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:37:19.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:19 vm11 bash[17885]: audit 2026-03-09T14:37:19.049546+0000 mon.c (mon.1) 43 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:37:19.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:19 vm11 bash[17885]: audit 2026-03-09T14:37:19.049852+0000 mon.c (mon.1) 44 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:37:19.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:19 vm11 bash[17885]: audit 2026-03-09T14:37:19.050355+0000 mon.c (mon.1) 45 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T14:37:19.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:19 vm11 bash[17885]: audit 2026-03-09T14:37:19.050688+0000 mon.c (mon.1) 46 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T14:37:19.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:19 vm11 bash[17885]: audit 2026-03-09T14:37:19.051387+0000 mon.c (mon.1) 47 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:37:19.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:19 vm11 bash[17885]: audit 2026-03-09T14:37:19.051783+0000 mon.c (mon.1) 48 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:37:19.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:19 vm11 bash[17885]: audit 2026-03-09T14:37:19.052176+0000 mon.c (mon.1) 49 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:37:19.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:19 vm11 bash[17885]: audit 2026-03-09T14:37:19.052564+0000 mon.c (mon.1) 50 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:37:19.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:19 vm11 bash[17885]: audit 2026-03-09T14:37:19.052880+0000 mon.c (mon.1) 51 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:37:19.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:19 vm11 bash[17885]: audit 2026-03-09T14:37:19.053180+0000 mon.c (mon.1) 52 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:37:19.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:19 vm11 bash[17885]: audit 2026-03-09T14:37:19.053471+0000 mon.c (mon.1) 53 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:37:19.505 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:19 vm11 bash[17885]: audit 2026-03-09T14:37:19.053791+0000 mon.c (mon.1) 54 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:37:19.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:19 vm11 bash[17885]: audit 2026-03-09T14:37:19.054378+0000 mon.c (mon.1) 55 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T14:37:19.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:19 vm11 bash[17885]: audit 2026-03-09T14:37:19.054725+0000 mon.c (mon.1) 56 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T14:37:19.505 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:19 vm11 bash[17885]: audit 2026-03-09T14:37:19.055217+0000 mon.c (mon.1) 57 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T14:37:19.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:19 vm07 bash[22585]: cephadm 2026-03-09T14:37:18.028640+0000 mgr.y (mgr.24310) 249 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.y) 2026-03-09T14:37:19.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:19 vm07 bash[22585]: cephadm 2026-03-09T14:37:18.030500+0000 mgr.y (mgr.24310) 250 : cephadm [INF] Failing over to other MGR 2026-03-09T14:37:19.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:19 vm07 bash[22585]: audit 2026-03-09T14:37:19.039093+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "mgr fail", "who": "y"}]': finished 2026-03-09T14:37:19.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:19 vm07 bash[22585]: cluster 2026-03-09T14:37:19.042495+0000 mon.a (mon.0) 788 : cluster [DBG] mgrmap e23: x(active, starting, since 1.0101s), standbys: y 2026-03-09T14:37:19.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:19 vm07 bash[22585]: audit 2026-03-09T14:37:19.049256+0000 mon.c (mon.1) 42 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:37:19.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:19 vm07 bash[22585]: audit 2026-03-09T14:37:19.049546+0000 mon.c (mon.1) 43 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:37:19.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:19 vm07 bash[22585]: audit 2026-03-09T14:37:19.049852+0000 mon.c (mon.1) 44 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:37:19.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:19 vm07 bash[22585]: audit 2026-03-09T14:37:19.050355+0000 mon.c (mon.1) 45 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T14:37:19.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:19 vm07 bash[22585]: audit 2026-03-09T14:37:19.050688+0000 mon.c (mon.1) 46 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T14:37:19.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:19 vm07 bash[22585]: audit 2026-03-09T14:37:19.051387+0000 mon.c (mon.1) 47 : audit 
[DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:37:19.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:19 vm07 bash[22585]: audit 2026-03-09T14:37:19.051783+0000 mon.c (mon.1) 48 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:37:19.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:19 vm07 bash[22585]: audit 2026-03-09T14:37:19.052176+0000 mon.c (mon.1) 49 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:37:19.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:19 vm07 bash[22585]: audit 2026-03-09T14:37:19.052564+0000 mon.c (mon.1) 50 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:37:19.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:19 vm07 bash[22585]: audit 2026-03-09T14:37:19.052880+0000 mon.c (mon.1) 51 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:37:19.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:19 vm07 bash[22585]: audit 2026-03-09T14:37:19.053180+0000 mon.c (mon.1) 52 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:37:19.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:19 vm07 bash[22585]: audit 2026-03-09T14:37:19.053471+0000 mon.c (mon.1) 53 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:37:19.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:19 vm07 bash[22585]: audit 2026-03-09T14:37:19.053791+0000 mon.c (mon.1) 54 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:37:19.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:19 vm07 bash[22585]: audit 2026-03-09T14:37:19.054378+0000 mon.c (mon.1) 55 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T14:37:19.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:19 vm07 bash[22585]: audit 2026-03-09T14:37:19.054725+0000 mon.c (mon.1) 56 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T14:37:19.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:19 vm07 bash[22585]: audit 2026-03-09T14:37:19.055217+0000 mon.c (mon.1) 57 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T14:37:19.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:19 vm07 bash[17480]: cephadm 2026-03-09T14:37:18.028640+0000 mgr.y (mgr.24310) 249 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.y) 2026-03-09T14:37:19.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:19 vm07 bash[17480]: cephadm 2026-03-09T14:37:18.030500+0000 mgr.y (mgr.24310) 250 : cephadm [INF] Failing over to other MGR 2026-03-09T14:37:19.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:19 vm07 bash[17480]: audit 2026-03-09T14:37:19.039093+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "mgr fail", "who": "y"}]': finished 
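(Note: the records immediately above capture the manager hand-off step of the cephadm upgrade: the active mgr.y logs "Upgrade: Need to upgrade myself (mgr.y)" and "Failing over to other MGR", the "mgr fail y" command finishes, and mgrmap e23 then reports x as the active manager with y on standby; the later "ceph orch ps" output in this log shows mgr.x already running the 19.2.3 target build while the remaining daemons are still on 17.2.0. As a rough sketch only, not part of the recorded test, the same hand-off could be observed interactively from a cephadm shell with commands such as:
  ceph mgr stat             # reports which mgr is currently active and how many standbys exist
  ceph orch upgrade status  # reports the target image, progress string and any error message
Both commands are standard Ceph CLI calls; "ceph orch upgrade status" also appears verbatim in the audit records of this run, while running them by hand at this point is an assumption about how one might inspect the failover, not something the test itself does here.)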
2026-03-09T14:37:19.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:19 vm07 bash[17480]: cluster 2026-03-09T14:37:19.042495+0000 mon.a (mon.0) 788 : cluster [DBG] mgrmap e23: x(active, starting, since 1.0101s), standbys: y 2026-03-09T14:37:19.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:19 vm07 bash[17480]: audit 2026-03-09T14:37:19.049256+0000 mon.c (mon.1) 42 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:37:19.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:19 vm07 bash[17480]: audit 2026-03-09T14:37:19.049546+0000 mon.c (mon.1) 43 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:37:19.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:19 vm07 bash[17480]: audit 2026-03-09T14:37:19.049852+0000 mon.c (mon.1) 44 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:37:19.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:19 vm07 bash[17480]: audit 2026-03-09T14:37:19.050355+0000 mon.c (mon.1) 45 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T14:37:19.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:19 vm07 bash[17480]: audit 2026-03-09T14:37:19.050688+0000 mon.c (mon.1) 46 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T14:37:19.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:19 vm07 bash[17480]: audit 2026-03-09T14:37:19.051387+0000 mon.c (mon.1) 47 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:37:19.659 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:19 vm07 bash[17480]: audit 2026-03-09T14:37:19.051783+0000 mon.c (mon.1) 48 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:37:19.659 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:19 vm07 bash[17480]: audit 2026-03-09T14:37:19.052176+0000 mon.c (mon.1) 49 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:37:19.659 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:19 vm07 bash[17480]: audit 2026-03-09T14:37:19.052564+0000 mon.c (mon.1) 50 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:37:19.659 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:19 vm07 bash[17480]: audit 2026-03-09T14:37:19.052880+0000 mon.c (mon.1) 51 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:37:19.659 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:19 vm07 bash[17480]: audit 2026-03-09T14:37:19.053180+0000 mon.c (mon.1) 52 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:37:19.659 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:19 vm07 bash[17480]: audit 2026-03-09T14:37:19.053471+0000 mon.c (mon.1) 53 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' 
entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:37:19.659 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:19 vm07 bash[17480]: audit 2026-03-09T14:37:19.053791+0000 mon.c (mon.1) 54 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:37:19.659 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:19 vm07 bash[17480]: audit 2026-03-09T14:37:19.054378+0000 mon.c (mon.1) 55 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T14:37:19.659 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:19 vm07 bash[17480]: audit 2026-03-09T14:37:19.054725+0000 mon.c (mon.1) 56 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T14:37:19.659 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:19 vm07 bash[17480]: audit 2026-03-09T14:37:19.055217+0000 mon.c (mon.1) 57 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T14:37:19.659 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:19 vm07 bash[17785]: debug 2026-03-09T14:37:19.534+0000 7fae36620000 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T14:37:19.811 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:19 vm11 bash[37598]: [09/Mar/2026:14:37:19] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T14:37:19.811 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:19 vm11 bash[37598]: [09/Mar/2026:14:37:19] ENGINE Bus STOPPED 2026-03-09T14:37:19.811 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:19 vm11 bash[37598]: [09/Mar/2026:14:37:19] ENGINE Bus STARTING 2026-03-09T14:37:20.071 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:20 vm07 bash[17785]: debug 2026-03-09T14:37:20.070+0000 7fae36620000 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T14:37:20.254 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:19 vm11 bash[37598]: [09/Mar/2026:14:37:19] ENGINE Serving on http://:::9283 2026-03-09T14:37:20.254 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:19 vm11 bash[37598]: [09/Mar/2026:14:37:19] ENGINE Bus STARTED 2026-03-09T14:37:20.353 INFO:teuthology.orchestra.run.vm07.stdout:true 2026-03-09T14:37:20.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:20 vm07 bash[22585]: cluster 2026-03-09T14:37:19.520699+0000 mon.a (mon.0) 789 : cluster [INF] Manager daemon x is now available 2026-03-09T14:37:20.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:20 vm07 bash[22585]: audit 2026-03-09T14:37:19.532783+0000 mon.a (mon.0) 790 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:20.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:20 vm07 bash[22585]: cephadm 2026-03-09T14:37:19.533131+0000 mgr.x (mgr.24889) 1 : cephadm [INF] Queued rgw.foo for migration 2026-03-09T14:37:20.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:20 vm07 bash[22585]: audit 2026-03-09T14:37:19.539861+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:20.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:20 vm07 bash[22585]: cephadm 2026-03-09T14:37:19.542469+0000 mgr.x (mgr.24889) 2 : cephadm [INF] Queued rgw.smpl for migration 2026-03-09T14:37:20.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:20 vm07 bash[22585]: cephadm 2026-03-09T14:37:19.542815+0000 
mgr.x (mgr.24889) 3 : cephadm [INF] No Migration is needed for rgw spec: {'placement': {'count': 2}, 'service_id': 'foo', 'service_name': 'rgw.foo', 'service_type': 'rgw', 'spec': {'rgw_frontend_port': 8000, 'rgw_realm': 'r', 'rgw_zone': 'z'}} 2026-03-09T14:37:20.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:20 vm07 bash[22585]: cephadm 2026-03-09T14:37:19.542831+0000 mgr.x (mgr.24889) 4 : cephadm [INF] No Migration is needed for rgw spec: {'placement': {'count': 2}, 'service_id': 'smpl', 'service_name': 'rgw.smpl', 'service_type': 'rgw'} 2026-03-09T14:37:20.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:20 vm07 bash[22585]: cephadm 2026-03-09T14:37:19.551255+0000 mgr.x (mgr.24889) 5 : cephadm [INF] Migrating certs/keys for iscsi.foo spec to cert store 2026-03-09T14:37:20.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:20 vm07 bash[22585]: cephadm 2026-03-09T14:37:19.551287+0000 mgr.x (mgr.24889) 6 : cephadm [INF] Migrating certs/keys for rgw.foo spec to cert store 2026-03-09T14:37:20.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:20 vm07 bash[22585]: cephadm 2026-03-09T14:37:19.551303+0000 mgr.x (mgr.24889) 7 : cephadm [INF] Migrating certs/keys for rgw.smpl spec to cert store 2026-03-09T14:37:20.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:20 vm07 bash[22585]: cephadm 2026-03-09T14:37:19.551383+0000 mgr.x (mgr.24889) 8 : cephadm [INF] Checking for cert/key for grafana.a 2026-03-09T14:37:20.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:20 vm07 bash[22585]: audit 2026-03-09T14:37:19.552488+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:20.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:20 vm07 bash[22585]: audit 2026-03-09T14:37:19.560443+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:20.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:20 vm07 bash[22585]: audit 2026-03-09T14:37:19.596196+0000 mon.c (mon.1) 58 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:37:20.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:20 vm07 bash[22585]: audit 2026-03-09T14:37:19.598299+0000 mon.c (mon.1) 59 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:37:20.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:20 vm07 bash[22585]: audit 2026-03-09T14:37:19.598645+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.24889 ' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:37:20.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:20 vm07 bash[22585]: audit 2026-03-09T14:37:19.663180+0000 mon.c (mon.1) 60 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch 2026-03-09T14:37:20.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:20 vm07 bash[22585]: audit 2026-03-09T14:37:19.663634+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24889 ' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch 2026-03-09T14:37:20.408 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:20 vm07 bash[17785]: debug 2026-03-09T14:37:20.170+0000 7fae36620000 -1 mgr[py] Module 
telemetry has missing NOTIFY_TYPES member 2026-03-09T14:37:20.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:20 vm07 bash[17480]: cluster 2026-03-09T14:37:19.520699+0000 mon.a (mon.0) 789 : cluster [INF] Manager daemon x is now available 2026-03-09T14:37:20.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:20 vm07 bash[17480]: audit 2026-03-09T14:37:19.532783+0000 mon.a (mon.0) 790 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:20.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:20 vm07 bash[17480]: cephadm 2026-03-09T14:37:19.533131+0000 mgr.x (mgr.24889) 1 : cephadm [INF] Queued rgw.foo for migration 2026-03-09T14:37:20.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:20 vm07 bash[17480]: audit 2026-03-09T14:37:19.539861+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:20.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:20 vm07 bash[17480]: cephadm 2026-03-09T14:37:19.542469+0000 mgr.x (mgr.24889) 2 : cephadm [INF] Queued rgw.smpl for migration 2026-03-09T14:37:20.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:20 vm07 bash[17480]: cephadm 2026-03-09T14:37:19.542815+0000 mgr.x (mgr.24889) 3 : cephadm [INF] No Migration is needed for rgw spec: {'placement': {'count': 2}, 'service_id': 'foo', 'service_name': 'rgw.foo', 'service_type': 'rgw', 'spec': {'rgw_frontend_port': 8000, 'rgw_realm': 'r', 'rgw_zone': 'z'}} 2026-03-09T14:37:20.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:20 vm07 bash[17480]: cephadm 2026-03-09T14:37:19.542831+0000 mgr.x (mgr.24889) 4 : cephadm [INF] No Migration is needed for rgw spec: {'placement': {'count': 2}, 'service_id': 'smpl', 'service_name': 'rgw.smpl', 'service_type': 'rgw'} 2026-03-09T14:37:20.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:20 vm07 bash[17480]: cephadm 2026-03-09T14:37:19.551255+0000 mgr.x (mgr.24889) 5 : cephadm [INF] Migrating certs/keys for iscsi.foo spec to cert store 2026-03-09T14:37:20.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:20 vm07 bash[17480]: cephadm 2026-03-09T14:37:19.551287+0000 mgr.x (mgr.24889) 6 : cephadm [INF] Migrating certs/keys for rgw.foo spec to cert store 2026-03-09T14:37:20.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:20 vm07 bash[17480]: cephadm 2026-03-09T14:37:19.551303+0000 mgr.x (mgr.24889) 7 : cephadm [INF] Migrating certs/keys for rgw.smpl spec to cert store 2026-03-09T14:37:20.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:20 vm07 bash[17480]: cephadm 2026-03-09T14:37:19.551383+0000 mgr.x (mgr.24889) 8 : cephadm [INF] Checking for cert/key for grafana.a 2026-03-09T14:37:20.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:20 vm07 bash[17480]: audit 2026-03-09T14:37:19.552488+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:20.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:20 vm07 bash[17480]: audit 2026-03-09T14:37:19.560443+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:20.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:20 vm07 bash[17480]: audit 2026-03-09T14:37:19.596196+0000 mon.c (mon.1) 58 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:37:20.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:20 vm07 bash[17480]: audit 2026-03-09T14:37:19.598299+0000 mon.c (mon.1) 59 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' 
cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:37:20.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:20 vm07 bash[17480]: audit 2026-03-09T14:37:19.598645+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.24889 ' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:37:20.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:20 vm07 bash[17480]: audit 2026-03-09T14:37:19.663180+0000 mon.c (mon.1) 60 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch 2026-03-09T14:37:20.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:20 vm07 bash[17480]: audit 2026-03-09T14:37:19.663634+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24889 ' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch 2026-03-09T14:37:20.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:20 vm11 bash[17885]: cluster 2026-03-09T14:37:19.520699+0000 mon.a (mon.0) 789 : cluster [INF] Manager daemon x is now available 2026-03-09T14:37:20.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:20 vm11 bash[17885]: audit 2026-03-09T14:37:19.532783+0000 mon.a (mon.0) 790 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:20.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:20 vm11 bash[17885]: cephadm 2026-03-09T14:37:19.533131+0000 mgr.x (mgr.24889) 1 : cephadm [INF] Queued rgw.foo for migration 2026-03-09T14:37:20.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:20 vm11 bash[17885]: audit 2026-03-09T14:37:19.539861+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:20.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:20 vm11 bash[17885]: cephadm 2026-03-09T14:37:19.542469+0000 mgr.x (mgr.24889) 2 : cephadm [INF] Queued rgw.smpl for migration 2026-03-09T14:37:20.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:20 vm11 bash[17885]: cephadm 2026-03-09T14:37:19.542815+0000 mgr.x (mgr.24889) 3 : cephadm [INF] No Migration is needed for rgw spec: {'placement': {'count': 2}, 'service_id': 'foo', 'service_name': 'rgw.foo', 'service_type': 'rgw', 'spec': {'rgw_frontend_port': 8000, 'rgw_realm': 'r', 'rgw_zone': 'z'}} 2026-03-09T14:37:20.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:20 vm11 bash[17885]: cephadm 2026-03-09T14:37:19.542831+0000 mgr.x (mgr.24889) 4 : cephadm [INF] No Migration is needed for rgw spec: {'placement': {'count': 2}, 'service_id': 'smpl', 'service_name': 'rgw.smpl', 'service_type': 'rgw'} 2026-03-09T14:37:20.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:20 vm11 bash[17885]: cephadm 2026-03-09T14:37:19.551255+0000 mgr.x (mgr.24889) 5 : cephadm [INF] Migrating certs/keys for iscsi.foo spec to cert store 2026-03-09T14:37:20.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:20 vm11 bash[17885]: cephadm 2026-03-09T14:37:19.551287+0000 mgr.x (mgr.24889) 6 : cephadm [INF] Migrating certs/keys for rgw.foo spec to cert store 2026-03-09T14:37:20.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:20 vm11 bash[17885]: cephadm 2026-03-09T14:37:19.551303+0000 mgr.x (mgr.24889) 7 : cephadm [INF] Migrating certs/keys for rgw.smpl spec to cert store 2026-03-09T14:37:20.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:20 vm11 bash[17885]: cephadm 2026-03-09T14:37:19.551383+0000 mgr.x 
(mgr.24889) 8 : cephadm [INF] Checking for cert/key for grafana.a 2026-03-09T14:37:20.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:20 vm11 bash[17885]: audit 2026-03-09T14:37:19.552488+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:20.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:20 vm11 bash[17885]: audit 2026-03-09T14:37:19.560443+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:20.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:20 vm11 bash[17885]: audit 2026-03-09T14:37:19.596196+0000 mon.c (mon.1) 58 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:37:20.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:20 vm11 bash[17885]: audit 2026-03-09T14:37:19.598299+0000 mon.c (mon.1) 59 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:37:20.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:20 vm11 bash[17885]: audit 2026-03-09T14:37:19.598645+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.24889 ' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:37:20.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:20 vm11 bash[17885]: audit 2026-03-09T14:37:19.663180+0000 mon.c (mon.1) 60 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch 2026-03-09T14:37:20.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:20 vm11 bash[17885]: audit 2026-03-09T14:37:19.663634+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24889 ' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch 2026-03-09T14:37:20.832 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:20 vm07 bash[17785]: debug 2026-03-09T14:37:20.430+0000 7fae36620000 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T14:37:20.832 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:20 vm07 bash[17785]: debug 2026-03-09T14:37:20.542+0000 7fae36620000 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T14:37:20.832 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:20 vm07 bash[17785]: debug 2026-03-09T14:37:20.606+0000 7fae36620000 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T14:37:20.832 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:20 vm07 bash[17785]: debug 2026-03-09T14:37:20.830+0000 7fae36620000 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T14:37:20.943 INFO:teuthology.orchestra.run.vm07.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T14:37:20.943 INFO:teuthology.orchestra.run.vm07.stdout:alertmanager.a vm07 *:9093,9094 running (4m) 3m ago 4m 15.5M - ba2b418f427c a61514665550 2026-03-09T14:37:20.943 INFO:teuthology.orchestra.run.vm07.stdout:grafana.a vm11 *:3000 running (4m) 7s ago 4m 40.9M - 8.3.5 dad864ee21e9 540326cca8f5 2026-03-09T14:37:20.943 INFO:teuthology.orchestra.run.vm07.stdout:iscsi.foo.vm07.ohlmos vm07 running (4m) 3m ago 4m 63.6M - 3.5 e1d6a67b021e 6e71f6329b43 2026-03-09T14:37:20.943 INFO:teuthology.orchestra.run.vm07.stdout:mgr.x vm11 *:8443,9283 running 
(11s) 7s ago 7m 294M - 19.2.3-678-ge911bdeb 654f31e6858e bc02e91cc35e 2026-03-09T14:37:20.943 INFO:teuthology.orchestra.run.vm07.stdout:mgr.y vm07 *:9283 running (7m) 3m ago 7m 442M - 17.2.0 e1d6a67b021e df6605dd81b3 2026-03-09T14:37:20.943 INFO:teuthology.orchestra.run.vm07.stdout:mon.a vm07 running (8m) 3m ago 8m 49.0M 2048M 17.2.0 e1d6a67b021e 47602ca6fae7 2026-03-09T14:37:20.943 INFO:teuthology.orchestra.run.vm07.stdout:mon.b vm11 running (7m) 7s ago 7m 37.7M 2048M 17.2.0 e1d6a67b021e eac3b7829b01 2026-03-09T14:37:20.943 INFO:teuthology.orchestra.run.vm07.stdout:mon.c vm07 running (7m) 3m ago 7m 45.0M 2048M 17.2.0 e1d6a67b021e 9c901130627b 2026-03-09T14:37:20.943 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.a vm07 *:9100 running (4m) 3m ago 4m 8815k - 1dbe0e931976 10000a0b8245 2026-03-09T14:37:20.943 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.b vm11 *:9100 running (4m) 7s ago 4m 9.78M - 1dbe0e931976 38d6b8c74501 2026-03-09T14:37:20.943 INFO:teuthology.orchestra.run.vm07.stdout:osd.0 vm07 running (7m) 3m ago 7m 46.2M 4096M 17.2.0 e1d6a67b021e 7a4a11fbf70d 2026-03-09T14:37:20.943 INFO:teuthology.orchestra.run.vm07.stdout:osd.1 vm07 running (6m) 3m ago 6m 48.3M 4096M 17.2.0 e1d6a67b021e 15e2e23b506b 2026-03-09T14:37:20.943 INFO:teuthology.orchestra.run.vm07.stdout:osd.2 vm07 running (6m) 3m ago 6m 43.3M 4096M 17.2.0 e1d6a67b021e fe41cd2240dc 2026-03-09T14:37:20.943 INFO:teuthology.orchestra.run.vm07.stdout:osd.3 vm07 running (6m) 3m ago 6m 44.3M 4096M 17.2.0 e1d6a67b021e b07b01a0b5aa 2026-03-09T14:37:20.943 INFO:teuthology.orchestra.run.vm07.stdout:osd.4 vm11 running (6m) 7s ago 6m 48.9M 4096M 17.2.0 e1d6a67b021e 172516d931e5 2026-03-09T14:37:20.943 INFO:teuthology.orchestra.run.vm07.stdout:osd.5 vm11 running (5m) 7s ago 5m 46.7M 4096M 17.2.0 e1d6a67b021e d7defb26b5d1 2026-03-09T14:37:20.943 INFO:teuthology.orchestra.run.vm07.stdout:osd.6 vm11 running (5m) 7s ago 5m 46.6M 4096M 17.2.0 e1d6a67b021e 52e28e90b585 2026-03-09T14:37:20.943 INFO:teuthology.orchestra.run.vm07.stdout:osd.7 vm11 running (5m) 7s ago 5m 48.2M 4096M 17.2.0 e1d6a67b021e abb74346bf4d 2026-03-09T14:37:20.943 INFO:teuthology.orchestra.run.vm07.stdout:prometheus.a vm11 *:9095 running (4m) 7s ago 4m 51.4M - 514e6a882f6e 58ae57f001a5 2026-03-09T14:37:20.943 INFO:teuthology.orchestra.run.vm07.stdout:rgw.foo.vm07.urmgxb vm07 *:8000 running (4m) 3m ago 4m 81.5M - 17.2.0 e1d6a67b021e 765128ae03a3 2026-03-09T14:37:20.943 INFO:teuthology.orchestra.run.vm07.stdout:rgw.foo.vm11.ncyump vm11 *:8000 running (4m) 7s ago 4m 82.1M - 17.2.0 e1d6a67b021e 33917711cfd6 2026-03-09T14:37:20.943 INFO:teuthology.orchestra.run.vm07.stdout:rgw.smpl.vm07.tkkeli vm07 *:80 running (4m) 3m ago 4m 81.5M - 17.2.0 e1d6a67b021e 377fed84fff0 2026-03-09T14:37:20.943 INFO:teuthology.orchestra.run.vm07.stdout:rgw.smpl.vm11.ocxkef vm11 *:80 running (4m) 7s ago 4m 82.2M - 17.2.0 e1d6a67b021e 90ec06d07cd4 2026-03-09T14:37:21.158 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:20 vm07 bash[17785]: debug 2026-03-09T14:37:20.894+0000 7fae36620000 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T14:37:21.158 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:20 vm07 bash[17785]: debug 2026-03-09T14:37:20.966+0000 7fae36620000 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T14:37:21.183 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T14:37:21.183 INFO:teuthology.orchestra.run.vm07.stdout: "mon": { 2026-03-09T14:37:21.183 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 
(43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 3 2026-03-09T14:37:21.183 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:37:21.183 INFO:teuthology.orchestra.run.vm07.stdout: "mgr": { 2026-03-09T14:37:21.183 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 1, 2026-03-09T14:37:21.183 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 1 2026-03-09T14:37:21.183 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:37:21.183 INFO:teuthology.orchestra.run.vm07.stdout: "osd": { 2026-03-09T14:37:21.183 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8 2026-03-09T14:37:21.183 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:37:21.183 INFO:teuthology.orchestra.run.vm07.stdout: "mds": {}, 2026-03-09T14:37:21.183 INFO:teuthology.orchestra.run.vm07.stdout: "rgw": { 2026-03-09T14:37:21.183 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 4 2026-03-09T14:37:21.183 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:37:21.183 INFO:teuthology.orchestra.run.vm07.stdout: "overall": { 2026-03-09T14:37:21.183 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 16, 2026-03-09T14:37:21.183 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 1 2026-03-09T14:37:21.183 INFO:teuthology.orchestra.run.vm07.stdout: } 2026-03-09T14:37:21.183 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T14:37:21.385 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T14:37:21.385 INFO:teuthology.orchestra.run.vm07.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", 2026-03-09T14:37:21.385 INFO:teuthology.orchestra.run.vm07.stdout: "in_progress": true, 2026-03-09T14:37:21.385 INFO:teuthology.orchestra.run.vm07.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-09T14:37:21.385 INFO:teuthology.orchestra.run.vm07.stdout: "services_complete": [], 2026-03-09T14:37:21.386 INFO:teuthology.orchestra.run.vm07.stdout: "progress": "1/23 daemons upgraded", 2026-03-09T14:37:21.386 INFO:teuthology.orchestra.run.vm07.stdout: "message": "", 2026-03-09T14:37:21.386 INFO:teuthology.orchestra.run.vm07.stdout: "is_paused": false 2026-03-09T14:37:21.386 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T14:37:21.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:21 vm07 bash[22585]: cephadm 2026-03-09T14:37:20.165112+0000 mgr.x (mgr.24889) 9 : cephadm [INF] Deploying cephadm binary to vm07 2026-03-09T14:37:21.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:21 vm07 bash[22585]: cluster 2026-03-09T14:37:20.322380+0000 mon.a (mon.0) 796 : cluster [DBG] mgrmap e24: x(active, since 2s), standbys: y 2026-03-09T14:37:21.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:21 vm07 bash[22585]: audit 2026-03-09T14:37:20.338617+0000 mgr.x (mgr.24889) 10 : audit [DBG] from='client.24901 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:37:21.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:21 vm07 bash[22585]: cluster 2026-03-09T14:37:20.339489+0000 mgr.x (mgr.24889) 11 : cluster [DBG] pgmap 
v3: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:37:21.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:21 vm07 bash[22585]: audit 2026-03-09T14:37:20.402756+0000 mon.a (mon.0) 797 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:21.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:21 vm07 bash[22585]: audit 2026-03-09T14:37:20.409717+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:21.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:21 vm07 bash[22585]: cephadm 2026-03-09T14:37:20.563088+0000 mgr.x (mgr.24889) 12 : cephadm [INF] Deploying cephadm binary to vm11 2026-03-09T14:37:21.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:21 vm07 bash[22585]: audit 2026-03-09T14:37:20.570806+0000 mgr.x (mgr.24889) 13 : audit [DBG] from='client.24916 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:37:21.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:21 vm07 bash[22585]: audit 2026-03-09T14:37:20.940524+0000 mgr.x (mgr.24889) 14 : audit [DBG] from='client.15012 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:37:21.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:21 vm07 bash[22585]: cephadm 2026-03-09T14:37:21.025578+0000 mgr.x (mgr.24889) 15 : cephadm [INF] [09/Mar/2026:14:37:21] ENGINE Bus STARTING 2026-03-09T14:37:21.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:21 vm07 bash[22585]: cluster 2026-03-09T14:37:21.049941+0000 mgr.x (mgr.24889) 16 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:37:21.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:21 vm07 bash[22585]: audit 2026-03-09T14:37:21.188725+0000 mon.c (mon.1) 61 : audit [DBG] from='client.? 
192.168.123.107:0/795495817' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:37:21.589 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:21 vm07 bash[17785]: debug 2026-03-09T14:37:21.530+0000 7fae36620000 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T14:37:21.589 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:21 vm07 bash[17480]: cephadm 2026-03-09T14:37:20.165112+0000 mgr.x (mgr.24889) 9 : cephadm [INF] Deploying cephadm binary to vm07 2026-03-09T14:37:21.589 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:21 vm07 bash[17480]: cluster 2026-03-09T14:37:20.322380+0000 mon.a (mon.0) 796 : cluster [DBG] mgrmap e24: x(active, since 2s), standbys: y 2026-03-09T14:37:21.589 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:21 vm07 bash[17480]: audit 2026-03-09T14:37:20.338617+0000 mgr.x (mgr.24889) 10 : audit [DBG] from='client.24901 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:37:21.589 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:21 vm07 bash[17480]: cluster 2026-03-09T14:37:20.339489+0000 mgr.x (mgr.24889) 11 : cluster [DBG] pgmap v3: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:37:21.589 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:21 vm07 bash[17480]: audit 2026-03-09T14:37:20.402756+0000 mon.a (mon.0) 797 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:21.589 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:21 vm07 bash[17480]: audit 2026-03-09T14:37:20.409717+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:21.589 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:21 vm07 bash[17480]: cephadm 2026-03-09T14:37:20.563088+0000 mgr.x (mgr.24889) 12 : cephadm [INF] Deploying cephadm binary to vm11 2026-03-09T14:37:21.589 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:21 vm07 bash[17480]: audit 2026-03-09T14:37:20.570806+0000 mgr.x (mgr.24889) 13 : audit [DBG] from='client.24916 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:37:21.589 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:21 vm07 bash[17480]: audit 2026-03-09T14:37:20.940524+0000 mgr.x (mgr.24889) 14 : audit [DBG] from='client.15012 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:37:21.589 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:21 vm07 bash[17480]: cephadm 2026-03-09T14:37:21.025578+0000 mgr.x (mgr.24889) 15 : cephadm [INF] [09/Mar/2026:14:37:21] ENGINE Bus STARTING 2026-03-09T14:37:21.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:21 vm07 bash[17480]: cluster 2026-03-09T14:37:21.049941+0000 mgr.x (mgr.24889) 16 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:37:21.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:21 vm07 bash[17480]: audit 2026-03-09T14:37:21.188725+0000 mon.c (mon.1) 61 : audit [DBG] from='client.? 
192.168.123.107:0/795495817' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:37:21.628 INFO:teuthology.orchestra.run.vm07.stdout:HEALTH_OK 2026-03-09T14:37:21.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:21 vm11 bash[17885]: cephadm 2026-03-09T14:37:20.165112+0000 mgr.x (mgr.24889) 9 : cephadm [INF] Deploying cephadm binary to vm07 2026-03-09T14:37:21.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:21 vm11 bash[17885]: cluster 2026-03-09T14:37:20.322380+0000 mon.a (mon.0) 796 : cluster [DBG] mgrmap e24: x(active, since 2s), standbys: y 2026-03-09T14:37:21.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:21 vm11 bash[17885]: audit 2026-03-09T14:37:20.338617+0000 mgr.x (mgr.24889) 10 : audit [DBG] from='client.24901 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:37:21.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:21 vm11 bash[17885]: cluster 2026-03-09T14:37:20.339489+0000 mgr.x (mgr.24889) 11 : cluster [DBG] pgmap v3: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:37:21.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:21 vm11 bash[17885]: audit 2026-03-09T14:37:20.402756+0000 mon.a (mon.0) 797 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:21.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:21 vm11 bash[17885]: audit 2026-03-09T14:37:20.409717+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:21.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:21 vm11 bash[17885]: cephadm 2026-03-09T14:37:20.563088+0000 mgr.x (mgr.24889) 12 : cephadm [INF] Deploying cephadm binary to vm11 2026-03-09T14:37:21.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:21 vm11 bash[17885]: audit 2026-03-09T14:37:20.570806+0000 mgr.x (mgr.24889) 13 : audit [DBG] from='client.24916 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:37:21.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:21 vm11 bash[17885]: audit 2026-03-09T14:37:20.940524+0000 mgr.x (mgr.24889) 14 : audit [DBG] from='client.15012 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:37:21.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:21 vm11 bash[17885]: cephadm 2026-03-09T14:37:21.025578+0000 mgr.x (mgr.24889) 15 : cephadm [INF] [09/Mar/2026:14:37:21] ENGINE Bus STARTING 2026-03-09T14:37:21.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:21 vm11 bash[17885]: cluster 2026-03-09T14:37:21.049941+0000 mgr.x (mgr.24889) 16 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:37:21.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:21 vm11 bash[17885]: audit 2026-03-09T14:37:21.188725+0000 mon.c (mon.1) 61 : audit [DBG] from='client.? 
192.168.123.107:0/795495817' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:37:21.908 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:21 vm07 bash[17785]: debug 2026-03-09T14:37:21.590+0000 7fae36620000 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T14:37:21.908 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:21 vm07 bash[17785]: debug 2026-03-09T14:37:21.646+0000 7fae36620000 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T14:37:22.312 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:21 vm07 bash[17785]: debug 2026-03-09T14:37:21.958+0000 7fae36620000 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T14:37:22.312 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:22 vm07 bash[17785]: debug 2026-03-09T14:37:22.018+0000 7fae36620000 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T14:37:22.312 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:22 vm07 bash[17785]: debug 2026-03-09T14:37:22.078+0000 7fae36620000 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T14:37:22.312 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:22 vm07 bash[17785]: debug 2026-03-09T14:37:22.162+0000 7fae36620000 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T14:37:22.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:22 vm07 bash[22585]: cephadm 2026-03-09T14:37:21.127107+0000 mgr.x (mgr.24889) 17 : cephadm [INF] [09/Mar/2026:14:37:21] ENGINE Serving on http://192.168.123.111:8765 2026-03-09T14:37:22.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:22 vm07 bash[22585]: cephadm 2026-03-09T14:37:21.239451+0000 mgr.x (mgr.24889) 18 : cephadm [INF] [09/Mar/2026:14:37:21] ENGINE Serving on https://192.168.123.111:7150 2026-03-09T14:37:22.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:22 vm07 bash[22585]: cephadm 2026-03-09T14:37:21.239497+0000 mgr.x (mgr.24889) 19 : cephadm [INF] [09/Mar/2026:14:37:21] ENGINE Bus STARTED 2026-03-09T14:37:22.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:22 vm07 bash[22585]: cephadm 2026-03-09T14:37:21.239831+0000 mgr.x (mgr.24889) 20 : cephadm [INF] [09/Mar/2026:14:37:21] ENGINE Client ('192.168.123.111', 58380) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T14:37:22.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:22 vm07 bash[22585]: audit 2026-03-09T14:37:21.387909+0000 mgr.x (mgr.24889) 21 : audit [DBG] from='client.15018 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:37:22.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:22 vm07 bash[22585]: audit 2026-03-09T14:37:21.633999+0000 mon.c (mon.1) 62 : audit [DBG] from='client.? 
192.168.123.107:0/3838848236' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:37:22.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:22 vm07 bash[17480]: cephadm 2026-03-09T14:37:21.127107+0000 mgr.x (mgr.24889) 17 : cephadm [INF] [09/Mar/2026:14:37:21] ENGINE Serving on http://192.168.123.111:8765 2026-03-09T14:37:22.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:22 vm07 bash[17480]: cephadm 2026-03-09T14:37:21.239451+0000 mgr.x (mgr.24889) 18 : cephadm [INF] [09/Mar/2026:14:37:21] ENGINE Serving on https://192.168.123.111:7150 2026-03-09T14:37:22.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:22 vm07 bash[17480]: cephadm 2026-03-09T14:37:21.239497+0000 mgr.x (mgr.24889) 19 : cephadm [INF] [09/Mar/2026:14:37:21] ENGINE Bus STARTED 2026-03-09T14:37:22.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:22 vm07 bash[17480]: cephadm 2026-03-09T14:37:21.239831+0000 mgr.x (mgr.24889) 20 : cephadm [INF] [09/Mar/2026:14:37:21] ENGINE Client ('192.168.123.111', 58380) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T14:37:22.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:22 vm07 bash[17480]: audit 2026-03-09T14:37:21.387909+0000 mgr.x (mgr.24889) 21 : audit [DBG] from='client.15018 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:37:22.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:22 vm07 bash[17480]: audit 2026-03-09T14:37:21.633999+0000 mon.c (mon.1) 62 : audit [DBG] from='client.? 192.168.123.107:0/3838848236' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:37:22.658 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:22 vm07 bash[17785]: debug 2026-03-09T14:37:22.502+0000 7fae36620000 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T14:37:22.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:22 vm11 bash[17885]: cephadm 2026-03-09T14:37:21.127107+0000 mgr.x (mgr.24889) 17 : cephadm [INF] [09/Mar/2026:14:37:21] ENGINE Serving on http://192.168.123.111:8765 2026-03-09T14:37:22.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:22 vm11 bash[17885]: cephadm 2026-03-09T14:37:21.239451+0000 mgr.x (mgr.24889) 18 : cephadm [INF] [09/Mar/2026:14:37:21] ENGINE Serving on https://192.168.123.111:7150 2026-03-09T14:37:22.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:22 vm11 bash[17885]: cephadm 2026-03-09T14:37:21.239497+0000 mgr.x (mgr.24889) 19 : cephadm [INF] [09/Mar/2026:14:37:21] ENGINE Bus STARTED 2026-03-09T14:37:22.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:22 vm11 bash[17885]: cephadm 2026-03-09T14:37:21.239831+0000 mgr.x (mgr.24889) 20 : cephadm [INF] [09/Mar/2026:14:37:21] ENGINE Client ('192.168.123.111', 58380) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T14:37:22.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:22 vm11 bash[17885]: audit 2026-03-09T14:37:21.387909+0000 mgr.x (mgr.24889) 21 : audit [DBG] from='client.15018 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:37:22.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:22 vm11 bash[17885]: audit 2026-03-09T14:37:21.633999+0000 mon.c (mon.1) 62 : audit [DBG] from='client.? 
192.168.123.107:0/3838848236' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:37:22.957 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:22 vm07 bash[17785]: debug 2026-03-09T14:37:22.686+0000 7fae36620000 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T14:37:22.957 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:22 vm07 bash[17785]: debug 2026-03-09T14:37:22.746+0000 7fae36620000 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T14:37:22.957 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:22 vm07 bash[17785]: debug 2026-03-09T14:37:22.806+0000 7fae36620000 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T14:37:23.330 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:22 vm07 bash[17785]: debug 2026-03-09T14:37:22.958+0000 7fae36620000 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T14:37:23.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:23 vm07 bash[22585]: cluster 2026-03-09T14:37:22.333303+0000 mon.a (mon.0) 799 : cluster [DBG] mgrmap e25: x(active, since 4s), standbys: y 2026-03-09T14:37:23.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:23 vm07 bash[22585]: cluster 2026-03-09T14:37:23.050292+0000 mgr.x (mgr.24889) 22 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:37:23.658 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:37:23 vm07 bash[42609]: level=error ts=2026-03-09T14:37:23.526Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.107:8443//api/prometheus_receiver\": dial tcp 192.168.123.107:8443: connect: connection refused; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:37:23.658 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:37:23 vm07 bash[42609]: level=warn ts=2026-03-09T14:37:23.527Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:37:23.658 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:37:23 vm07 bash[42609]: level=warn ts=2026-03-09T14:37:23.528Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:37:23.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:23 vm07 bash[17480]: cluster 2026-03-09T14:37:22.333303+0000 mon.a (mon.0) 799 : cluster [DBG] mgrmap e25: x(active, since 4s), standbys: y 2026-03-09T14:37:23.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:23 vm07 bash[17480]: cluster 2026-03-09T14:37:23.050292+0000 mgr.x (mgr.24889) 22 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:37:23.658 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:23 vm07 bash[17785]: debug 
2026-03-09T14:37:23.450+0000 7fae36620000 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T14:37:23.658 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:23 vm07 bash[17785]: [09/Mar/2026:14:37:23] ENGINE Bus STARTING 2026-03-09T14:37:23.658 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:23 vm07 bash[17785]: CherryPy Checker: 2026-03-09T14:37:23.658 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:23 vm07 bash[17785]: The Application mounted at '' has an empty config. 2026-03-09T14:37:23.658 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:23 vm07 bash[17785]: [09/Mar/2026:14:37:23] ENGINE Serving on http://:::9283 2026-03-09T14:37:23.658 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:23 vm07 bash[17785]: [09/Mar/2026:14:37:23] ENGINE Bus STARTED 2026-03-09T14:37:23.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:23 vm11 bash[37598]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:37:23] "GET /metrics HTTP/1.1" 200 34971 "" "Prometheus/2.33.4" 2026-03-09T14:37:23.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:23 vm11 bash[17885]: cluster 2026-03-09T14:37:22.333303+0000 mon.a (mon.0) 799 : cluster [DBG] mgrmap e25: x(active, since 4s), standbys: y 2026-03-09T14:37:23.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:23 vm11 bash[17885]: cluster 2026-03-09T14:37:23.050292+0000 mgr.x (mgr.24889) 22 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:37:24.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:24 vm07 bash[22585]: cluster 2026-03-09T14:37:23.457375+0000 mon.a (mon.0) 800 : cluster [DBG] Standby manager daemon y restarted 2026-03-09T14:37:24.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:24 vm07 bash[22585]: cluster 2026-03-09T14:37:23.457470+0000 mon.a (mon.0) 801 : cluster [DBG] Standby manager daemon y started 2026-03-09T14:37:24.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:24 vm07 bash[22585]: audit 2026-03-09T14:37:23.459571+0000 mon.c (mon.1) 63 : audit [DBG] from='mgr.? 192.168.123.107:0/601586018' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch 2026-03-09T14:37:24.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:24 vm07 bash[22585]: audit 2026-03-09T14:37:23.462245+0000 mon.c (mon.1) 64 : audit [DBG] from='mgr.? 192.168.123.107:0/601586018' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T14:37:24.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:24 vm07 bash[22585]: audit 2026-03-09T14:37:23.463220+0000 mon.c (mon.1) 65 : audit [DBG] from='mgr.? 192.168.123.107:0/601586018' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/key"}]: dispatch 2026-03-09T14:37:24.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:24 vm07 bash[22585]: audit 2026-03-09T14:37:23.463799+0000 mon.c (mon.1) 66 : audit [DBG] from='mgr.? 
192.168.123.107:0/601586018' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T14:37:24.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:24 vm07 bash[17480]: cluster 2026-03-09T14:37:23.457375+0000 mon.a (mon.0) 800 : cluster [DBG] Standby manager daemon y restarted 2026-03-09T14:37:24.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:24 vm07 bash[17480]: cluster 2026-03-09T14:37:23.457470+0000 mon.a (mon.0) 801 : cluster [DBG] Standby manager daemon y started 2026-03-09T14:37:24.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:24 vm07 bash[17480]: audit 2026-03-09T14:37:23.459571+0000 mon.c (mon.1) 63 : audit [DBG] from='mgr.? 192.168.123.107:0/601586018' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch 2026-03-09T14:37:24.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:24 vm07 bash[17480]: audit 2026-03-09T14:37:23.462245+0000 mon.c (mon.1) 64 : audit [DBG] from='mgr.? 192.168.123.107:0/601586018' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T14:37:24.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:24 vm07 bash[17480]: audit 2026-03-09T14:37:23.463220+0000 mon.c (mon.1) 65 : audit [DBG] from='mgr.? 192.168.123.107:0/601586018' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/key"}]: dispatch 2026-03-09T14:37:24.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:24 vm07 bash[17480]: audit 2026-03-09T14:37:23.463799+0000 mon.c (mon.1) 66 : audit [DBG] from='mgr.? 192.168.123.107:0/601586018' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T14:37:24.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:24 vm11 bash[17885]: cluster 2026-03-09T14:37:23.457375+0000 mon.a (mon.0) 800 : cluster [DBG] Standby manager daemon y restarted 2026-03-09T14:37:24.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:24 vm11 bash[17885]: cluster 2026-03-09T14:37:23.457470+0000 mon.a (mon.0) 801 : cluster [DBG] Standby manager daemon y started 2026-03-09T14:37:24.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:24 vm11 bash[17885]: audit 2026-03-09T14:37:23.459571+0000 mon.c (mon.1) 63 : audit [DBG] from='mgr.? 192.168.123.107:0/601586018' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch 2026-03-09T14:37:24.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:24 vm11 bash[17885]: audit 2026-03-09T14:37:23.462245+0000 mon.c (mon.1) 64 : audit [DBG] from='mgr.? 192.168.123.107:0/601586018' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T14:37:24.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:24 vm11 bash[17885]: audit 2026-03-09T14:37:23.463220+0000 mon.c (mon.1) 65 : audit [DBG] from='mgr.? 192.168.123.107:0/601586018' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/key"}]: dispatch 2026-03-09T14:37:24.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:24 vm11 bash[17885]: audit 2026-03-09T14:37:23.463799+0000 mon.c (mon.1) 66 : audit [DBG] from='mgr.? 
192.168.123.107:0/601586018' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T14:37:25.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:25 vm07 bash[22585]: cluster 2026-03-09T14:37:24.358496+0000 mon.a (mon.0) 802 : cluster [DBG] mgrmap e26: x(active, since 6s), standbys: y 2026-03-09T14:37:25.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:25 vm07 bash[22585]: cluster 2026-03-09T14:37:25.050597+0000 mgr.x (mgr.24889) 23 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:37:25.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:25 vm07 bash[17480]: cluster 2026-03-09T14:37:24.358496+0000 mon.a (mon.0) 802 : cluster [DBG] mgrmap e26: x(active, since 6s), standbys: y 2026-03-09T14:37:25.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:25 vm07 bash[17480]: cluster 2026-03-09T14:37:25.050597+0000 mgr.x (mgr.24889) 23 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:37:25.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:25 vm11 bash[17885]: cluster 2026-03-09T14:37:24.358496+0000 mon.a (mon.0) 802 : cluster [DBG] mgrmap e26: x(active, since 6s), standbys: y 2026-03-09T14:37:25.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:25 vm11 bash[17885]: cluster 2026-03-09T14:37:25.050597+0000 mgr.x (mgr.24889) 23 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:37:27.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:26 vm11 bash[17885]: audit 2026-03-09T14:37:25.943387+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:27.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:26 vm11 bash[17885]: audit 2026-03-09T14:37:25.956261+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:27.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:26 vm11 bash[17885]: audit 2026-03-09T14:37:26.279638+0000 mon.a (mon.0) 805 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:27.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:26 vm11 bash[17885]: audit 2026-03-09T14:37:26.294337+0000 mon.a (mon.0) 806 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:27.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:26 vm11 bash[17885]: audit 2026-03-09T14:37:26.575545+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:27.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:26 vm11 bash[17885]: audit 2026-03-09T14:37:26.584393+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:27.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:26 vm11 bash[17885]: audit 2026-03-09T14:37:26.587164+0000 mon.c (mon.1) 67 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:37:27.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:26 vm11 bash[17885]: audit 2026-03-09T14:37:26.587401+0000 mon.a (mon.0) 809 : audit [INF] from='mgr.24889 ' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:37:27.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:26 vm11 bash[17885]: audit 2026-03-09T14:37:26.869982+0000 mon.a (mon.0) 810 : audit [INF] from='mgr.24889 ' 
entity='mgr.x' 2026-03-09T14:37:27.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:26 vm11 bash[17885]: audit 2026-03-09T14:37:26.878750+0000 mon.a (mon.0) 811 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:27.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:26 vm07 bash[22585]: audit 2026-03-09T14:37:25.943387+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:27.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:26 vm07 bash[22585]: audit 2026-03-09T14:37:25.956261+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:27.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:26 vm07 bash[22585]: audit 2026-03-09T14:37:26.279638+0000 mon.a (mon.0) 805 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:27.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:26 vm07 bash[22585]: audit 2026-03-09T14:37:26.294337+0000 mon.a (mon.0) 806 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:27.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:26 vm07 bash[22585]: audit 2026-03-09T14:37:26.575545+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:27.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:26 vm07 bash[22585]: audit 2026-03-09T14:37:26.584393+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:27.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:26 vm07 bash[22585]: audit 2026-03-09T14:37:26.587164+0000 mon.c (mon.1) 67 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:37:27.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:26 vm07 bash[22585]: audit 2026-03-09T14:37:26.587401+0000 mon.a (mon.0) 809 : audit [INF] from='mgr.24889 ' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:37:27.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:26 vm07 bash[22585]: audit 2026-03-09T14:37:26.869982+0000 mon.a (mon.0) 810 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:27.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:26 vm07 bash[22585]: audit 2026-03-09T14:37:26.878750+0000 mon.a (mon.0) 811 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:27.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:26 vm07 bash[17480]: audit 2026-03-09T14:37:25.943387+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:27.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:26 vm07 bash[17480]: audit 2026-03-09T14:37:25.956261+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:27.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:26 vm07 bash[17480]: audit 2026-03-09T14:37:26.279638+0000 mon.a (mon.0) 805 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:27.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:26 vm07 bash[17480]: audit 2026-03-09T14:37:26.294337+0000 mon.a (mon.0) 806 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:27.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:26 vm07 bash[17480]: audit 2026-03-09T14:37:26.575545+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:27.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:26 vm07 bash[17480]: audit 
2026-03-09T14:37:26.584393+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:27.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:26 vm07 bash[17480]: audit 2026-03-09T14:37:26.587164+0000 mon.c (mon.1) 67 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:37:27.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:26 vm07 bash[17480]: audit 2026-03-09T14:37:26.587401+0000 mon.a (mon.0) 809 : audit [INF] from='mgr.24889 ' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:37:27.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:26 vm07 bash[17480]: audit 2026-03-09T14:37:26.869982+0000 mon.a (mon.0) 810 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:27.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:26 vm07 bash[17480]: audit 2026-03-09T14:37:26.878750+0000 mon.a (mon.0) 811 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:27 vm11 bash[17885]: cluster 2026-03-09T14:37:27.051221+0000 mgr.x (mgr.24889) 24 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 27 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T14:37:28.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:27 vm07 bash[22585]: cluster 2026-03-09T14:37:27.051221+0000 mgr.x (mgr.24889) 24 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 27 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T14:37:28.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:27 vm07 bash[17480]: cluster 2026-03-09T14:37:27.051221+0000 mgr.x (mgr.24889) 24 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 27 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T14:37:29.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:28 vm11 bash[17885]: audit 2026-03-09T14:37:27.495487+0000 mgr.x (mgr.24889) 25 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:37:29.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:28 vm07 bash[22585]: audit 2026-03-09T14:37:27.495487+0000 mgr.x (mgr.24889) 25 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:37:29.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:28 vm07 bash[17480]: audit 2026-03-09T14:37:27.495487+0000 mgr.x (mgr.24889) 25 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:37:30.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:29 vm11 bash[17885]: cluster 2026-03-09T14:37:29.051560+0000 mgr.x (mgr.24889) 26 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 21 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T14:37:30.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:29 vm07 bash[22585]: cluster 2026-03-09T14:37:29.051560+0000 mgr.x (mgr.24889) 26 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 21 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T14:37:30.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:29 vm07 bash[17480]: 
cluster 2026-03-09T14:37:29.051560+0000 mgr.x (mgr.24889) 26 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 21 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T14:37:32.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:31 vm11 bash[17885]: cluster 2026-03-09T14:37:31.052068+0000 mgr.x (mgr.24889) 27 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T14:37:32.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:31 vm07 bash[22585]: cluster 2026-03-09T14:37:31.052068+0000 mgr.x (mgr.24889) 27 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T14:37:32.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:31 vm07 bash[17480]: cluster 2026-03-09T14:37:31.052068+0000 mgr.x (mgr.24889) 27 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T14:37:32.908 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:32 vm07 bash[17785]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:37:32] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T14:37:33.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:33 vm07 bash[22585]: cluster 2026-03-09T14:37:33.052461+0000 mgr.x (mgr.24889) 28 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T14:37:33.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:33 vm07 bash[17480]: cluster 2026-03-09T14:37:33.052461+0000 mgr.x (mgr.24889) 28 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T14:37:33.431 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:33 vm11 bash[17885]: cluster 2026-03-09T14:37:33.052461+0000 mgr.x (mgr.24889) 28 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T14:37:33.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:33 vm11 bash[37598]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:37:33] "GET /metrics HTTP/1.1" 200 34971 "" "Prometheus/2.33.4" 2026-03-09T14:37:33.908 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:37:33 vm07 bash[42609]: level=error ts=2026-03-09T14:37:33.527Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:37:33.908 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:37:33 vm07 bash[42609]: level=warn ts=2026-03-09T14:37:33.532Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-09T14:37:33.908 
INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:37:33 vm07 bash[42609]: level=warn ts=2026-03-09T14:37:33.533Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.111:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.111 because it doesn't contain any IP SANs" 2026-03-09T14:37:34.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:34 vm11 bash[17885]: audit 2026-03-09T14:37:33.501564+0000 mon.a (mon.0) 812 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:34.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:34 vm11 bash[17885]: audit 2026-03-09T14:37:33.508657+0000 mon.a (mon.0) 813 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:34.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:34 vm11 bash[17885]: cephadm 2026-03-09T14:37:33.509942+0000 mgr.x (mgr.24889) 29 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-09T14:37:34.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:34 vm11 bash[17885]: cephadm 2026-03-09T14:37:33.510101+0000 mgr.x (mgr.24889) 30 : cephadm [INF] Updating vm11:/etc/ceph/ceph.conf 2026-03-09T14:37:34.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:34 vm11 bash[17885]: audit 2026-03-09T14:37:33.510360+0000 mon.c (mon.1) 68 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:37:34.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:34 vm11 bash[17885]: audit 2026-03-09T14:37:33.510659+0000 mon.a (mon.0) 814 : audit [INF] from='mgr.24889 ' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:37:34.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:34 vm11 bash[17885]: audit 2026-03-09T14:37:33.511511+0000 mon.c (mon.1) 69 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:37:34.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:34 vm11 bash[17885]: audit 2026-03-09T14:37:33.512006+0000 mon.c (mon.1) 70 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:37:34.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:34 vm11 bash[17885]: cephadm 2026-03-09T14:37:33.546745+0000 mgr.x (mgr.24889) 31 : cephadm [INF] Updating vm11:/var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/config/ceph.conf 2026-03-09T14:37:34.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:34 vm11 bash[17885]: cephadm 2026-03-09T14:37:33.546870+0000 mgr.x (mgr.24889) 32 : cephadm [INF] Updating vm07:/var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/config/ceph.conf 2026-03-09T14:37:34.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:34 vm11 bash[17885]: cephadm 2026-03-09T14:37:33.587921+0000 mgr.x (mgr.24889) 33 : cephadm [INF] Updating vm11:/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:37:34.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:34 vm11 bash[17885]: cephadm 2026-03-09T14:37:33.588065+0000 mgr.x (mgr.24889) 34 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:37:34.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:34 vm11 bash[17885]: cephadm 2026-03-09T14:37:33.621333+0000 mgr.x (mgr.24889) 35 
: cephadm [INF] Updating vm11:/var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/config/ceph.client.admin.keyring 2026-03-09T14:37:34.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:34 vm11 bash[17885]: cephadm 2026-03-09T14:37:33.623951+0000 mgr.x (mgr.24889) 36 : cephadm [INF] Updating vm07:/var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/config/ceph.client.admin.keyring 2026-03-09T14:37:34.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:34 vm11 bash[17885]: audit 2026-03-09T14:37:33.663340+0000 mon.a (mon.0) 815 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:34.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:34 vm11 bash[17885]: audit 2026-03-09T14:37:33.675649+0000 mon.a (mon.0) 816 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:34.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:34 vm11 bash[17885]: audit 2026-03-09T14:37:33.681428+0000 mon.a (mon.0) 817 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:34.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:34 vm11 bash[17885]: audit 2026-03-09T14:37:33.688709+0000 mon.a (mon.0) 818 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:34.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:34 vm11 bash[17885]: audit 2026-03-09T14:37:33.695330+0000 mon.a (mon.0) 819 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:34.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:34 vm11 bash[17885]: audit 2026-03-09T14:37:33.711203+0000 mon.a (mon.0) 820 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:34.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:34 vm11 bash[17885]: audit 2026-03-09T14:37:33.717895+0000 mon.a (mon.0) 821 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:34.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:34 vm11 bash[17885]: audit 2026-03-09T14:37:33.725510+0000 mon.a (mon.0) 822 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:34.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:34 vm11 bash[17885]: audit 2026-03-09T14:37:33.731726+0000 mon.a (mon.0) 823 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:34.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:34 vm11 bash[17885]: cephadm 2026-03-09T14:37:33.732426+0000 mgr.x (mgr.24889) 37 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 
2026-03-09T14:37:34.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:34 vm11 bash[17885]: cephadm 2026-03-09T14:37:33.739140+0000 mgr.x (mgr.24889) 38 : cephadm [INF] Deploying daemon alertmanager.a on vm07 2026-03-09T14:37:34.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:34 vm07 bash[22585]: audit 2026-03-09T14:37:33.501564+0000 mon.a (mon.0) 812 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:34.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:34 vm07 bash[22585]: audit 2026-03-09T14:37:33.508657+0000 mon.a (mon.0) 813 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:34.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:34 vm07 bash[22585]: cephadm 2026-03-09T14:37:33.509942+0000 mgr.x (mgr.24889) 29 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-09T14:37:34.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:34 vm07 bash[22585]: cephadm 2026-03-09T14:37:33.510101+0000 mgr.x (mgr.24889) 30 : cephadm [INF] Updating vm11:/etc/ceph/ceph.conf 2026-03-09T14:37:34.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:34 vm07 bash[22585]: audit 2026-03-09T14:37:33.510360+0000 mon.c (mon.1) 68 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:37:34.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:34 vm07 bash[22585]: audit 2026-03-09T14:37:33.510659+0000 mon.a (mon.0) 814 : audit [INF] from='mgr.24889 ' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:37:34.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:34 vm07 bash[22585]: audit 2026-03-09T14:37:33.511511+0000 mon.c (mon.1) 69 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:37:34.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:34 vm07 bash[22585]: audit 2026-03-09T14:37:33.512006+0000 mon.c (mon.1) 70 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:37:34.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:34 vm07 bash[22585]: cephadm 2026-03-09T14:37:33.546745+0000 mgr.x (mgr.24889) 31 : cephadm [INF] Updating vm11:/var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/config/ceph.conf 2026-03-09T14:37:34.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:34 vm07 bash[22585]: cephadm 2026-03-09T14:37:33.546870+0000 mgr.x (mgr.24889) 32 : cephadm [INF] Updating vm07:/var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/config/ceph.conf 2026-03-09T14:37:34.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:34 vm07 bash[22585]: cephadm 2026-03-09T14:37:33.587921+0000 mgr.x (mgr.24889) 33 : cephadm [INF] Updating vm11:/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:37:34.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:34 vm07 bash[22585]: cephadm 2026-03-09T14:37:33.588065+0000 mgr.x (mgr.24889) 34 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:37:34.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:34 vm07 bash[22585]: cephadm 2026-03-09T14:37:33.621333+0000 mgr.x (mgr.24889) 35 : cephadm [INF] Updating vm11:/var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/config/ceph.client.admin.keyring 2026-03-09T14:37:34.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:34 vm07 bash[22585]: 
cephadm 2026-03-09T14:37:33.623951+0000 mgr.x (mgr.24889) 36 : cephadm [INF] Updating vm07:/var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/config/ceph.client.admin.keyring 2026-03-09T14:37:34.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:34 vm07 bash[22585]: audit 2026-03-09T14:37:33.663340+0000 mon.a (mon.0) 815 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:34.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:34 vm07 bash[22585]: audit 2026-03-09T14:37:33.675649+0000 mon.a (mon.0) 816 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:34.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:34 vm07 bash[22585]: audit 2026-03-09T14:37:33.681428+0000 mon.a (mon.0) 817 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:34.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:34 vm07 bash[22585]: audit 2026-03-09T14:37:33.688709+0000 mon.a (mon.0) 818 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:34.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:34 vm07 bash[22585]: audit 2026-03-09T14:37:33.695330+0000 mon.a (mon.0) 819 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:34.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:34 vm07 bash[22585]: audit 2026-03-09T14:37:33.711203+0000 mon.a (mon.0) 820 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:34.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:34 vm07 bash[22585]: audit 2026-03-09T14:37:33.717895+0000 mon.a (mon.0) 821 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:34.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:34 vm07 bash[22585]: audit 2026-03-09T14:37:33.725510+0000 mon.a (mon.0) 822 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:34.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:34 vm07 bash[22585]: audit 2026-03-09T14:37:33.731726+0000 mon.a (mon.0) 823 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:34.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:34 vm07 bash[22585]: cephadm 2026-03-09T14:37:33.732426+0000 mgr.x (mgr.24889) 37 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 
2026-03-09T14:37:34.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:34 vm07 bash[22585]: cephadm 2026-03-09T14:37:33.739140+0000 mgr.x (mgr.24889) 38 : cephadm [INF] Deploying daemon alertmanager.a on vm07 2026-03-09T14:37:34.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:34 vm07 bash[17480]: audit 2026-03-09T14:37:33.501564+0000 mon.a (mon.0) 812 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:34.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:34 vm07 bash[17480]: audit 2026-03-09T14:37:33.508657+0000 mon.a (mon.0) 813 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:34.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:34 vm07 bash[17480]: cephadm 2026-03-09T14:37:33.509942+0000 mgr.x (mgr.24889) 29 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-09T14:37:34.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:34 vm07 bash[17480]: cephadm 2026-03-09T14:37:33.510101+0000 mgr.x (mgr.24889) 30 : cephadm [INF] Updating vm11:/etc/ceph/ceph.conf 2026-03-09T14:37:34.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:34 vm07 bash[17480]: audit 2026-03-09T14:37:33.510360+0000 mon.c (mon.1) 68 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:37:34.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:34 vm07 bash[17480]: audit 2026-03-09T14:37:33.510659+0000 mon.a (mon.0) 814 : audit [INF] from='mgr.24889 ' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:37:34.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:34 vm07 bash[17480]: audit 2026-03-09T14:37:33.511511+0000 mon.c (mon.1) 69 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:37:34.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:34 vm07 bash[17480]: audit 2026-03-09T14:37:33.512006+0000 mon.c (mon.1) 70 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:37:34.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:34 vm07 bash[17480]: cephadm 2026-03-09T14:37:33.546745+0000 mgr.x (mgr.24889) 31 : cephadm [INF] Updating vm11:/var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/config/ceph.conf 2026-03-09T14:37:34.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:34 vm07 bash[17480]: cephadm 2026-03-09T14:37:33.546870+0000 mgr.x (mgr.24889) 32 : cephadm [INF] Updating vm07:/var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/config/ceph.conf 2026-03-09T14:37:34.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:34 vm07 bash[17480]: cephadm 2026-03-09T14:37:33.587921+0000 mgr.x (mgr.24889) 33 : cephadm [INF] Updating vm11:/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:37:34.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:34 vm07 bash[17480]: cephadm 2026-03-09T14:37:33.588065+0000 mgr.x (mgr.24889) 34 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:37:34.909 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:34 vm07 bash[17480]: cephadm 2026-03-09T14:37:33.621333+0000 mgr.x (mgr.24889) 35 : cephadm [INF] Updating vm11:/var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/config/ceph.client.admin.keyring 2026-03-09T14:37:34.909 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:34 vm07 bash[17480]: 
cephadm 2026-03-09T14:37:33.623951+0000 mgr.x (mgr.24889) 36 : cephadm [INF] Updating vm07:/var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/config/ceph.client.admin.keyring 2026-03-09T14:37:34.909 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:34 vm07 bash[17480]: audit 2026-03-09T14:37:33.663340+0000 mon.a (mon.0) 815 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:34.909 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:34 vm07 bash[17480]: audit 2026-03-09T14:37:33.675649+0000 mon.a (mon.0) 816 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:34.909 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:34 vm07 bash[17480]: audit 2026-03-09T14:37:33.681428+0000 mon.a (mon.0) 817 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:34.909 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:34 vm07 bash[17480]: audit 2026-03-09T14:37:33.688709+0000 mon.a (mon.0) 818 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:34.909 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:34 vm07 bash[17480]: audit 2026-03-09T14:37:33.695330+0000 mon.a (mon.0) 819 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:34.909 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:34 vm07 bash[17480]: audit 2026-03-09T14:37:33.711203+0000 mon.a (mon.0) 820 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:34.909 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:34 vm07 bash[17480]: audit 2026-03-09T14:37:33.717895+0000 mon.a (mon.0) 821 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:34.909 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:34 vm07 bash[17480]: audit 2026-03-09T14:37:33.725510+0000 mon.a (mon.0) 822 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:34.909 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:34 vm07 bash[17480]: audit 2026-03-09T14:37:33.731726+0000 mon.a (mon.0) 823 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:34.909 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:34 vm07 bash[17480]: cephadm 2026-03-09T14:37:33.732426+0000 mgr.x (mgr.24889) 37 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 
2026-03-09T14:37:34.909 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:34 vm07 bash[17480]: cephadm 2026-03-09T14:37:33.739140+0000 mgr.x (mgr.24889) 38 : cephadm [INF] Deploying daemon alertmanager.a on vm07 2026-03-09T14:37:35.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:35 vm07 bash[22585]: audit 2026-03-09T14:37:34.590656+0000 mon.c (mon.1) 71 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:37:35.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:35 vm07 bash[22585]: cluster 2026-03-09T14:37:35.052800+0000 mgr.x (mgr.24889) 39 : cluster [DBG] pgmap v11: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T14:37:35.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:35 vm07 bash[17480]: audit 2026-03-09T14:37:34.590656+0000 mon.c (mon.1) 71 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:37:35.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:35 vm07 bash[17480]: cluster 2026-03-09T14:37:35.052800+0000 mgr.x (mgr.24889) 39 : cluster [DBG] pgmap v11: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T14:37:36.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:35 vm11 bash[17885]: audit 2026-03-09T14:37:34.590656+0000 mon.c (mon.1) 71 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:37:36.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:35 vm11 bash[17885]: cluster 2026-03-09T14:37:35.052800+0000 mgr.x (mgr.24889) 39 : cluster [DBG] pgmap v11: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T14:37:37.387 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:37 vm07 bash[22585]: cluster 2026-03-09T14:37:37.053350+0000 mgr.x (mgr.24889) 40 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T14:37:37.387 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:37 vm07 bash[17480]: cluster 2026-03-09T14:37:37.053350+0000 mgr.x (mgr.24889) 40 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T14:37:37.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:37 vm11 bash[17885]: cluster 2026-03-09T14:37:37.053350+0000 mgr.x (mgr.24889) 40 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T14:37:37.907 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:37:37 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:37.907 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:37:37 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:37.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:37 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:37.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:37 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:37.908 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:37:37 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:37.908 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:37:37 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:37.908 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:37:37 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:37.908 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:37 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:37.908 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:37 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:37:38.203 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:38 vm07 bash[22585]: audit 2026-03-09T14:37:37.505765+0000 mgr.x (mgr.24889) 41 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:37:38.203 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:37:37 vm07 systemd[1]: Stopping Ceph alertmanager.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040... 2026-03-09T14:37:38.203 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:37:37 vm07 bash[42609]: level=info ts=2026-03-09T14:37:37.991Z caller=main.go:557 msg="Received SIGTERM, exiting gracefully..." 2026-03-09T14:37:38.203 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:37:38 vm07 bash[50945]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-alertmanager-a 2026-03-09T14:37:38.203 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:37:38 vm07 systemd[1]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@alertmanager.a.service: Deactivated successfully. 2026-03-09T14:37:38.203 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:37:38 vm07 systemd[1]: Stopped Ceph alertmanager.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:37:38.203 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:38 vm07 bash[17480]: audit 2026-03-09T14:37:37.505765+0000 mgr.x (mgr.24889) 41 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:37:38.453 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:37:38 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:38.453 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:38 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:38.453 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:37:38 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:38.453 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:37:38 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:38.453 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:37:38 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:38.454 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:37:38 vm07 systemd[1]: Started Ceph alertmanager.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:37:38.454 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:38 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:38.454 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:37:38 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:38.454 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:38 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:38.454 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:38 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:37:38.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:38 vm11 bash[17885]: audit 2026-03-09T14:37:37.505765+0000 mgr.x (mgr.24889) 41 : audit [DBG] from='client.24643 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:37:38.706 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:37:38 vm07 bash[51060]: ts=2026-03-09T14:37:38.459Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)" 2026-03-09T14:37:38.706 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:37:38 vm07 bash[51060]: ts=2026-03-09T14:37:38.459Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)" 2026-03-09T14:37:38.706 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:37:38 vm07 bash[51060]: ts=2026-03-09T14:37:38.462Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.123.107 port=9094 2026-03-09T14:37:38.706 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:37:38 vm07 bash[51060]: ts=2026-03-09T14:37:38.463Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s 2026-03-09T14:37:38.706 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:37:38 vm07 bash[51060]: ts=2026-03-09T14:37:38.480Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-09T14:37:38.707 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:37:38 vm07 bash[51060]: ts=2026-03-09T14:37:38.480Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-09T14:37:38.707 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:37:38 vm07 bash[51060]: ts=2026-03-09T14:37:38.482Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9093 2026-03-09T14:37:38.707 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:37:38 vm07 bash[51060]: ts=2026-03-09T14:37:38.482Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9093 2026-03-09T14:37:39.497 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:39 vm07 bash[17480]: audit 2026-03-09T14:37:38.314083+0000 mon.a (mon.0) 824 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:39.497 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:39 vm07 bash[17480]: audit 2026-03-09T14:37:38.323526+0000 mon.a (mon.0) 825 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:39.497 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:39 vm07 bash[17480]: cephadm 2026-03-09T14:37:38.324409+0000 mgr.x (mgr.24889) 42 : cephadm [INF] Reconfiguring iscsi.foo.vm07.ohlmos (dependencies changed)... 
2026-03-09T14:37:39.497 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:39 vm07 bash[17480]: audit 2026-03-09T14:37:38.327555+0000 mon.c (mon.1) 72 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm07.ohlmos", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T14:37:39.497 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:39 vm07 bash[17480]: audit 2026-03-09T14:37:38.328296+0000 mon.a (mon.0) 826 : audit [INF] from='mgr.24889 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm07.ohlmos", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T14:37:39.497 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:39 vm07 bash[17480]: cephadm 2026-03-09T14:37:38.329339+0000 mgr.x (mgr.24889) 43 : cephadm [INF] Reconfiguring daemon iscsi.foo.vm07.ohlmos on vm07 2026-03-09T14:37:39.497 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:39 vm07 bash[17480]: audit 2026-03-09T14:37:38.331302+0000 mon.c (mon.1) 73 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:37:39.497 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:39 vm07 bash[17480]: audit 2026-03-09T14:37:38.880722+0000 mon.a (mon.0) 827 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:39.497 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:39 vm07 bash[17480]: cephadm 2026-03-09T14:37:38.887280+0000 mgr.x (mgr.24889) 44 : cephadm [INF] Reconfiguring node-exporter.a (dependencies changed)... 2026-03-09T14:37:39.497 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:39 vm07 bash[17480]: cephadm 2026-03-09T14:37:38.887577+0000 mgr.x (mgr.24889) 45 : cephadm [INF] Deploying daemon node-exporter.a on vm07 2026-03-09T14:37:39.497 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:39 vm07 bash[17480]: audit 2026-03-09T14:37:38.888405+0000 mon.a (mon.0) 828 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:39.497 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:39 vm07 bash[17480]: cluster 2026-03-09T14:37:39.053621+0000 mgr.x (mgr.24889) 46 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:37:39.497 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:39 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:39.497 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:39 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:37:39.498 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:37:39 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:39.498 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:37:39 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:39.498 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:39 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:39.498 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:39 vm07 systemd[1]: Stopping Ceph node-exporter.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040... 2026-03-09T14:37:39.498 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:37:39 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:39.498 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:39 vm07 bash[22585]: audit 2026-03-09T14:37:38.314083+0000 mon.a (mon.0) 824 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:39.498 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:39 vm07 bash[22585]: audit 2026-03-09T14:37:38.323526+0000 mon.a (mon.0) 825 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:39.498 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:39 vm07 bash[22585]: cephadm 2026-03-09T14:37:38.324409+0000 mgr.x (mgr.24889) 42 : cephadm [INF] Reconfiguring iscsi.foo.vm07.ohlmos (dependencies changed)... 
2026-03-09T14:37:39.498 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:39 vm07 bash[22585]: audit 2026-03-09T14:37:38.327555+0000 mon.c (mon.1) 72 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm07.ohlmos", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T14:37:39.498 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:39 vm07 bash[22585]: audit 2026-03-09T14:37:38.328296+0000 mon.a (mon.0) 826 : audit [INF] from='mgr.24889 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm07.ohlmos", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T14:37:39.498 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:39 vm07 bash[22585]: cephadm 2026-03-09T14:37:38.329339+0000 mgr.x (mgr.24889) 43 : cephadm [INF] Reconfiguring daemon iscsi.foo.vm07.ohlmos on vm07 2026-03-09T14:37:39.498 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:39 vm07 bash[22585]: audit 2026-03-09T14:37:38.331302+0000 mon.c (mon.1) 73 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:37:39.498 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:39 vm07 bash[22585]: audit 2026-03-09T14:37:38.880722+0000 mon.a (mon.0) 827 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:39.498 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:39 vm07 bash[22585]: cephadm 2026-03-09T14:37:38.887280+0000 mgr.x (mgr.24889) 44 : cephadm [INF] Reconfiguring node-exporter.a (dependencies changed)... 2026-03-09T14:37:39.498 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:39 vm07 bash[22585]: cephadm 2026-03-09T14:37:38.887577+0000 mgr.x (mgr.24889) 45 : cephadm [INF] Deploying daemon node-exporter.a on vm07 2026-03-09T14:37:39.498 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:39 vm07 bash[22585]: audit 2026-03-09T14:37:38.888405+0000 mon.a (mon.0) 828 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:39.498 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:39 vm07 bash[22585]: cluster 2026-03-09T14:37:39.053621+0000 mgr.x (mgr.24889) 46 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:37:39.498 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:39 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:39.499 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:37:39 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:37:39.499 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:37:39 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:39.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:39 vm11 bash[17885]: audit 2026-03-09T14:37:38.314083+0000 mon.a (mon.0) 824 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:39.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:39 vm11 bash[17885]: audit 2026-03-09T14:37:38.323526+0000 mon.a (mon.0) 825 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:39.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:39 vm11 bash[17885]: cephadm 2026-03-09T14:37:38.324409+0000 mgr.x (mgr.24889) 42 : cephadm [INF] Reconfiguring iscsi.foo.vm07.ohlmos (dependencies changed)... 2026-03-09T14:37:39.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:39 vm11 bash[17885]: audit 2026-03-09T14:37:38.327555+0000 mon.c (mon.1) 72 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm07.ohlmos", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T14:37:39.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:39 vm11 bash[17885]: audit 2026-03-09T14:37:38.328296+0000 mon.a (mon.0) 826 : audit [INF] from='mgr.24889 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm07.ohlmos", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T14:37:39.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:39 vm11 bash[17885]: cephadm 2026-03-09T14:37:38.329339+0000 mgr.x (mgr.24889) 43 : cephadm [INF] Reconfiguring daemon iscsi.foo.vm07.ohlmos on vm07 2026-03-09T14:37:39.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:39 vm11 bash[17885]: audit 2026-03-09T14:37:38.331302+0000 mon.c (mon.1) 73 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:37:39.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:39 vm11 bash[17885]: audit 2026-03-09T14:37:38.880722+0000 mon.a (mon.0) 827 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:39.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:39 vm11 bash[17885]: cephadm 2026-03-09T14:37:38.887280+0000 mgr.x (mgr.24889) 44 : cephadm [INF] Reconfiguring node-exporter.a (dependencies changed)... 
2026-03-09T14:37:39.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:39 vm11 bash[17885]: cephadm 2026-03-09T14:37:38.887577+0000 mgr.x (mgr.24889) 45 : cephadm [INF] Deploying daemon node-exporter.a on vm07 2026-03-09T14:37:39.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:39 vm11 bash[17885]: audit 2026-03-09T14:37:38.888405+0000 mon.a (mon.0) 828 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:39.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:39 vm11 bash[17885]: cluster 2026-03-09T14:37:39.053621+0000 mgr.x (mgr.24889) 46 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:37:39.802 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:39 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:39.802 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:37:39 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:39.802 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:37:39 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:39.802 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:37:39 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:39.802 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:37:39 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:39.802 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:39 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:37:39.803 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:39 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:39.803 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:39 vm07 bash[51545]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-node-exporter-a 2026-03-09T14:37:39.803 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:39 vm07 systemd[1]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@node-exporter.a.service: Main process exited, code=exited, status=143/n/a 2026-03-09T14:37:39.803 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:39 vm07 systemd[1]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@node-exporter.a.service: Failed with result 'exit-code'. 2026-03-09T14:37:39.803 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:39 vm07 systemd[1]: Stopped Ceph node-exporter.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:37:39.803 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:39 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:39.803 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:37:39 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:40.157 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:39 vm07 systemd[1]: Started Ceph node-exporter.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:37:40.158 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:39 vm07 bash[51682]: Unable to find image 'quay.io/prometheus/node-exporter:v1.7.0' locally 2026-03-09T14:37:40.448 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 systemd[1]: Stopping Ceph grafana.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040... 2026-03-09T14:37:40.448 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 bash[39363]: Error response from daemon: No such container: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-grafana.a 2026-03-09T14:37:40.448 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 bash[33410]: t=2026-03-09T14:37:40+0000 lvl=info msg="Shutdown started" logger=server reason="System signal: terminated" 2026-03-09T14:37:40.448 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 bash[33410]: t=2026-03-09T14:37:40+0000 lvl=info msg="Database locked, sleeping then retrying" logger=sqlstore error="database is locked" retry=0 2026-03-09T14:37:40.448 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:40 vm11 bash[17885]: audit 2026-03-09T14:37:39.528231+0000 mon.c (mon.1) 74 : audit [DBG] from='client.? 
192.168.123.107:0/800429883' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T14:37:40.448 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:40 vm11 bash[17885]: audit 2026-03-09T14:37:39.797512+0000 mon.c (mon.1) 75 : audit [INF] from='client.? 192.168.123.107:0/3790224320' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/4000554118"}]: dispatch 2026-03-09T14:37:40.448 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:40 vm11 bash[17885]: audit 2026-03-09T14:37:39.797951+0000 mon.a (mon.0) 829 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/4000554118"}]: dispatch 2026-03-09T14:37:40.448 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:40 vm11 bash[17885]: audit 2026-03-09T14:37:39.837543+0000 mon.a (mon.0) 830 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:40.448 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:40 vm11 bash[17885]: audit 2026-03-09T14:37:39.845108+0000 mon.a (mon.0) 831 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:40.448 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:40 vm11 bash[17885]: cephadm 2026-03-09T14:37:39.846173+0000 mgr.x (mgr.24889) 47 : cephadm [INF] Reconfiguring grafana.a (dependencies changed)... 2026-03-09T14:37:40.448 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:40 vm11 bash[17885]: cephadm 2026-03-09T14:37:39.851502+0000 mgr.x (mgr.24889) 48 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-09T14:37:40.448 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:40 vm11 bash[17885]: audit 2026-03-09T14:37:39.897610+0000 mon.a (mon.0) 832 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:40.448 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:40 vm11 bash[17885]: audit 2026-03-09T14:37:39.904190+0000 mon.a (mon.0) 833 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:40.448 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:40 vm11 bash[17885]: audit 2026-03-09T14:37:39.904424+0000 mgr.x (mgr.24889) 49 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T14:37:40.449 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:40 vm11 bash[17885]: cephadm 2026-03-09T14:37:39.906205+0000 mgr.x (mgr.24889) 50 : cephadm [INF] Reconfiguring daemon grafana.a on vm11 2026-03-09T14:37:40.449 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:40 vm11 bash[17885]: audit 2026-03-09T14:37:39.906963+0000 mon.c (mon.1) 76 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T14:37:40.657 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:37:40 vm07 bash[51060]: ts=2026-03-09T14:37:40.464Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000500045s 2026-03-09T14:37:40.657 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:40 vm07 bash[17480]: audit 2026-03-09T14:37:39.528231+0000 mon.c (mon.1) 74 : audit [DBG] from='client.? 
192.168.123.107:0/800429883' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T14:37:40.657 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:40 vm07 bash[17480]: audit 2026-03-09T14:37:39.797512+0000 mon.c (mon.1) 75 : audit [INF] from='client.? 192.168.123.107:0/3790224320' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/4000554118"}]: dispatch 2026-03-09T14:37:40.657 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:40 vm07 bash[17480]: audit 2026-03-09T14:37:39.797951+0000 mon.a (mon.0) 829 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/4000554118"}]: dispatch 2026-03-09T14:37:40.657 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:40 vm07 bash[17480]: audit 2026-03-09T14:37:39.837543+0000 mon.a (mon.0) 830 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:40.657 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:40 vm07 bash[17480]: audit 2026-03-09T14:37:39.845108+0000 mon.a (mon.0) 831 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:40.657 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:40 vm07 bash[17480]: cephadm 2026-03-09T14:37:39.846173+0000 mgr.x (mgr.24889) 47 : cephadm [INF] Reconfiguring grafana.a (dependencies changed)... 2026-03-09T14:37:40.657 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:40 vm07 bash[17480]: cephadm 2026-03-09T14:37:39.851502+0000 mgr.x (mgr.24889) 48 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-09T14:37:40.657 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:40 vm07 bash[17480]: audit 2026-03-09T14:37:39.897610+0000 mon.a (mon.0) 832 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:40.657 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:40 vm07 bash[17480]: audit 2026-03-09T14:37:39.904190+0000 mon.a (mon.0) 833 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:40.657 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:40 vm07 bash[17480]: audit 2026-03-09T14:37:39.904424+0000 mgr.x (mgr.24889) 49 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T14:37:40.657 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:40 vm07 bash[17480]: cephadm 2026-03-09T14:37:39.906205+0000 mgr.x (mgr.24889) 50 : cephadm [INF] Reconfiguring daemon grafana.a on vm11 2026-03-09T14:37:40.657 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:40 vm07 bash[17480]: audit 2026-03-09T14:37:39.906963+0000 mon.c (mon.1) 76 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T14:37:40.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:40 vm07 bash[22585]: audit 2026-03-09T14:37:39.528231+0000 mon.c (mon.1) 74 : audit [DBG] from='client.? 192.168.123.107:0/800429883' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T14:37:40.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:40 vm07 bash[22585]: audit 2026-03-09T14:37:39.797512+0000 mon.c (mon.1) 75 : audit [INF] from='client.? 
192.168.123.107:0/3790224320' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/4000554118"}]: dispatch 2026-03-09T14:37:40.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:40 vm07 bash[22585]: audit 2026-03-09T14:37:39.797951+0000 mon.a (mon.0) 829 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/4000554118"}]: dispatch 2026-03-09T14:37:40.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:40 vm07 bash[22585]: audit 2026-03-09T14:37:39.837543+0000 mon.a (mon.0) 830 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:40.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:40 vm07 bash[22585]: audit 2026-03-09T14:37:39.845108+0000 mon.a (mon.0) 831 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:40.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:40 vm07 bash[22585]: cephadm 2026-03-09T14:37:39.846173+0000 mgr.x (mgr.24889) 47 : cephadm [INF] Reconfiguring grafana.a (dependencies changed)... 2026-03-09T14:37:40.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:40 vm07 bash[22585]: cephadm 2026-03-09T14:37:39.851502+0000 mgr.x (mgr.24889) 48 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-09T14:37:40.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:40 vm07 bash[22585]: audit 2026-03-09T14:37:39.897610+0000 mon.a (mon.0) 832 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:40.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:40 vm07 bash[22585]: audit 2026-03-09T14:37:39.904190+0000 mon.a (mon.0) 833 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:40.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:40 vm07 bash[22585]: audit 2026-03-09T14:37:39.904424+0000 mgr.x (mgr.24889) 49 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T14:37:40.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:40 vm07 bash[22585]: cephadm 2026-03-09T14:37:39.906205+0000 mgr.x (mgr.24889) 50 : cephadm [INF] Reconfiguring daemon grafana.a on vm11 2026-03-09T14:37:40.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:40 vm07 bash[22585]: audit 2026-03-09T14:37:39.906963+0000 mon.c (mon.1) 76 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T14:37:40.706 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 bash[39371]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-grafana-a 2026-03-09T14:37:40.706 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 bash[39404]: Error response from daemon: No such container: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-grafana.a 2026-03-09T14:37:40.707 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 systemd[1]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@grafana.a.service: Deactivated successfully. 2026-03-09T14:37:40.707 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 systemd[1]: Stopped Ceph grafana.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:37:40.707 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 systemd[1]: Started Ceph grafana.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 
2026-03-09T14:37:40.707 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 bash[39430]: t=2026-03-09T14:37:40+0000 lvl=info msg="The state of unified alerting is still not defined. The decision will be made during as we run the database migrations" logger=settings 2026-03-09T14:37:40.707 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 bash[39430]: t=2026-03-09T14:37:40+0000 lvl=warn msg="falling back to legacy setting of 'min_interval_seconds'; please use the configuration option in the `unified_alerting` section if Grafana 8 alerts are enabled." logger=settings 2026-03-09T14:37:40.707 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 bash[39430]: t=2026-03-09T14:37:40+0000 lvl=info msg="Config loaded from" logger=settings file=/usr/share/grafana/conf/defaults.ini 2026-03-09T14:37:40.707 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 bash[39430]: t=2026-03-09T14:37:40+0000 lvl=info msg="Config loaded from" logger=settings file=/etc/grafana/grafana.ini 2026-03-09T14:37:40.707 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 bash[39430]: t=2026-03-09T14:37:40+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_DATA=/var/lib/grafana" 2026-03-09T14:37:40.707 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 bash[39430]: t=2026-03-09T14:37:40+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_LOGS=/var/log/grafana" 2026-03-09T14:37:40.707 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 bash[39430]: t=2026-03-09T14:37:40+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" 2026-03-09T14:37:40.707 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 bash[39430]: t=2026-03-09T14:37:40+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 2026-03-09T14:37:40.707 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 bash[39430]: t=2026-03-09T14:37:40+0000 lvl=info msg="Path Home" logger=settings path=/usr/share/grafana 2026-03-09T14:37:40.707 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 bash[39430]: t=2026-03-09T14:37:40+0000 lvl=info msg="Path Data" logger=settings path=/var/lib/grafana 2026-03-09T14:37:40.707 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 bash[39430]: t=2026-03-09T14:37:40+0000 lvl=info msg="Path Logs" logger=settings path=/var/log/grafana 2026-03-09T14:37:40.707 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 bash[39430]: t=2026-03-09T14:37:40+0000 lvl=info msg="Path Plugins" logger=settings path=/var/lib/grafana/plugins 2026-03-09T14:37:40.707 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 bash[39430]: t=2026-03-09T14:37:40+0000 lvl=info msg="Path Provisioning" logger=settings path=/etc/grafana/provisioning 2026-03-09T14:37:40.707 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 bash[39430]: t=2026-03-09T14:37:40+0000 lvl=info msg="App mode production" logger=settings 2026-03-09T14:37:40.707 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 bash[39430]: t=2026-03-09T14:37:40+0000 lvl=info msg="Connecting to DB" logger=sqlstore dbtype=sqlite3 2026-03-09T14:37:40.707 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 bash[39430]: t=2026-03-09T14:37:40+0000 lvl=warn msg="SQLite database file has broader 
permissions than it should" logger=sqlstore path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r----- 2026-03-09T14:37:40.707 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 bash[39430]: t=2026-03-09T14:37:40+0000 lvl=info msg="Starting DB migrations" logger=migrator 2026-03-09T14:37:40.707 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 bash[39430]: t=2026-03-09T14:37:40+0000 lvl=info msg="migrations completed" logger=migrator performed=0 skipped=377 duration=474.612µs 2026-03-09T14:37:40.707 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 bash[39430]: t=2026-03-09T14:37:40+0000 lvl=info msg="Created default organization" logger=sqlstore 2026-03-09T14:37:40.707 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 bash[39430]: t=2026-03-09T14:37:40+0000 lvl=info msg="Initialising plugins" logger=plugin.manager 2026-03-09T14:37:40.707 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 bash[39430]: t=2026-03-09T14:37:40+0000 lvl=info msg="Plugin registered" logger=plugin.manager pluginId=input 2026-03-09T14:37:40.707 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 bash[39430]: t=2026-03-09T14:37:40+0000 lvl=info msg="Plugin registered" logger=plugin.manager pluginId=grafana-piechart-panel 2026-03-09T14:37:40.959 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 bash[39430]: t=2026-03-09T14:37:40+0000 lvl=info msg="Plugin registered" logger=plugin.manager pluginId=vonage-status-panel 2026-03-09T14:37:40.959 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 bash[39430]: t=2026-03-09T14:37:40+0000 lvl=info msg="Live Push Gateway initialization" logger=live.push_http 2026-03-09T14:37:40.959 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 bash[39430]: t=2026-03-09T14:37:40+0000 lvl=info msg="deleted datasource based on configuration" logger=provisioning.datasources name=Dashboard1 2026-03-09T14:37:40.959 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 bash[39430]: t=2026-03-09T14:37:40+0000 lvl=info msg="inserting datasource from configuration " logger=provisioning.datasources name=Dashboard1 uid=P43CA22E17D0F9596 2026-03-09T14:37:40.959 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 bash[39430]: t=2026-03-09T14:37:40+0000 lvl=info msg="inserting datasource from configuration " logger=provisioning.datasources name=Loki uid=P8E80F9AEF21F6940 2026-03-09T14:37:40.959 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 bash[39430]: t=2026-03-09T14:37:40+0000 lvl=info msg="HTTP Server Listen" logger=http.server address=[::]:3000 protocol=https subUrl= socket= 2026-03-09T14:37:40.959 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 bash[39430]: t=2026-03-09T14:37:40+0000 lvl=info msg="warming cache for startup" logger=ngalert 2026-03-09T14:37:40.959 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 bash[39430]: t=2026-03-09T14:37:40+0000 lvl=info msg="starting MultiOrg Alertmanager" logger=ngalert.multiorg.alertmanager 2026-03-09T14:37:41.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:40 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:37:41.254 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:40 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:41.254 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:37:40 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:41.254 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:37:40 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:41.254 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:37:40 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:41.255 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:37:40 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:41.255 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:40 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:41.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:40 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:41.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:41 vm11 systemd[1]: Stopping Ceph node-exporter.b for f59f9828-1bc3-11f1-bfd8-7b3d0c866040... 
2026-03-09T14:37:41.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:41 vm11 bash[39548]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-node-exporter-b 2026-03-09T14:37:41.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:41 vm11 systemd[1]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@node-exporter.b.service: Main process exited, code=exited, status=143/n/a 2026-03-09T14:37:41.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:41 vm11 systemd[1]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@node-exporter.b.service: Failed with result 'exit-code'. 2026-03-09T14:37:41.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:41 vm11 systemd[1]: Stopped Ceph node-exporter.b for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:37:41.255 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:40 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:41.553 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:41 vm07 bash[17480]: audit 2026-03-09T14:37:40.330565+0000 mon.a (mon.0) 834 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/4000554118"}]': finished 2026-03-09T14:37:41.553 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:41 vm07 bash[17480]: cluster 2026-03-09T14:37:40.332549+0000 mon.a (mon.0) 835 : cluster [DBG] osdmap e84: 8 total, 8 up, 8 in 2026-03-09T14:37:41.553 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:41 vm07 bash[17480]: audit 2026-03-09T14:37:40.531276+0000 mon.a (mon.0) 836 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:41.553 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:41 vm07 bash[17480]: audit 2026-03-09T14:37:40.540433+0000 mon.a (mon.0) 837 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:41.553 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:41 vm07 bash[17480]: cephadm 2026-03-09T14:37:40.541678+0000 mgr.x (mgr.24889) 51 : cephadm [INF] Reconfiguring node-exporter.b (dependencies changed)... 2026-03-09T14:37:41.553 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:41 vm07 bash[17480]: cephadm 2026-03-09T14:37:40.541945+0000 mgr.x (mgr.24889) 52 : cephadm [INF] Deploying daemon node-exporter.b on vm11 2026-03-09T14:37:41.553 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:41 vm07 bash[17480]: audit 2026-03-09T14:37:40.555365+0000 mon.a (mon.0) 838 : audit [INF] from='client.? 192.168.123.107:0/3626361538' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/13006971"}]: dispatch 2026-03-09T14:37:41.553 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:41 vm07 bash[17480]: cluster 2026-03-09T14:37:41.054069+0000 mgr.x (mgr.24889) 53 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:37:41.553 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:41 vm07 bash[22585]: audit 2026-03-09T14:37:40.330565+0000 mon.a (mon.0) 834 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/4000554118"}]': finished 2026-03-09T14:37:41.553 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:41 vm07 bash[22585]: cluster 2026-03-09T14:37:40.332549+0000 mon.a (mon.0) 835 : cluster [DBG] osdmap e84: 8 total, 8 up, 8 in 2026-03-09T14:37:41.553 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:41 vm07 bash[22585]: audit 2026-03-09T14:37:40.531276+0000 mon.a (mon.0) 836 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:41.553 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:41 vm07 bash[22585]: audit 2026-03-09T14:37:40.540433+0000 mon.a (mon.0) 837 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:41.553 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:41 vm07 bash[22585]: cephadm 2026-03-09T14:37:40.541678+0000 mgr.x (mgr.24889) 51 : cephadm [INF] Reconfiguring node-exporter.b (dependencies changed)... 2026-03-09T14:37:41.553 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:41 vm07 bash[22585]: cephadm 2026-03-09T14:37:40.541945+0000 mgr.x (mgr.24889) 52 : cephadm [INF] Deploying daemon node-exporter.b on vm11 2026-03-09T14:37:41.553 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:41 vm07 bash[22585]: audit 2026-03-09T14:37:40.555365+0000 mon.a (mon.0) 838 : audit [INF] from='client.? 192.168.123.107:0/3626361538' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/13006971"}]: dispatch 2026-03-09T14:37:41.553 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:41 vm07 bash[22585]: cluster 2026-03-09T14:37:41.054069+0000 mgr.x (mgr.24889) 53 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:37:41.553 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:41 vm07 bash[51682]: v1.7.0: Pulling from prometheus/node-exporter 2026-03-09T14:37:41.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:41 vm11 bash[17885]: audit 2026-03-09T14:37:40.330565+0000 mon.a (mon.0) 834 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/4000554118"}]': finished 2026-03-09T14:37:41.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:41 vm11 bash[17885]: cluster 2026-03-09T14:37:40.332549+0000 mon.a (mon.0) 835 : cluster [DBG] osdmap e84: 8 total, 8 up, 8 in 2026-03-09T14:37:41.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:41 vm11 bash[17885]: audit 2026-03-09T14:37:40.531276+0000 mon.a (mon.0) 836 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:41.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:41 vm11 bash[17885]: audit 2026-03-09T14:37:40.540433+0000 mon.a (mon.0) 837 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:41.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:41 vm11 bash[17885]: cephadm 2026-03-09T14:37:40.541678+0000 mgr.x (mgr.24889) 51 : cephadm [INF] Reconfiguring node-exporter.b (dependencies changed)... 2026-03-09T14:37:41.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:41 vm11 bash[17885]: cephadm 2026-03-09T14:37:40.541945+0000 mgr.x (mgr.24889) 52 : cephadm [INF] Deploying daemon node-exporter.b on vm11 2026-03-09T14:37:41.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:41 vm11 bash[17885]: audit 2026-03-09T14:37:40.555365+0000 mon.a (mon.0) 838 : audit [INF] from='client.? 
192.168.123.107:0/3626361538' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/13006971"}]: dispatch 2026-03-09T14:37:41.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:41 vm11 bash[17885]: cluster 2026-03-09T14:37:41.054069+0000 mgr.x (mgr.24889) 53 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:37:41.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:41 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:41.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:41 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:41.754 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:37:41 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:41.755 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:37:41 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:41.755 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:37:41 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:41.755 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:37:41 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:41.755 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:41 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:41.755 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:41 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:41.755 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:41 vm11 systemd[1]: Started Ceph node-exporter.b for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:37:41.755 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:41 vm11 bash[39660]: Unable to find image 'quay.io/prometheus/node-exporter:v1.7.0' locally 2026-03-09T14:37:41.755 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:41 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:41.907 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:41 vm07 bash[51682]: 2abcce694348: Pulling fs layer 2026-03-09T14:37:41.907 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:41 vm07 bash[51682]: 455fd88e5221: Pulling fs layer 2026-03-09T14:37:41.907 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:41 vm07 bash[51682]: 324153f2810a: Pulling fs layer 2026-03-09T14:37:42.403 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: 455fd88e5221: Download complete 2026-03-09T14:37:42.403 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: 2abcce694348: Verifying Checksum 2026-03-09T14:37:42.403 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: 2abcce694348: Download complete 2026-03-09T14:37:42.403 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: 2abcce694348: Pull complete 2026-03-09T14:37:42.403 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: 324153f2810a: Verifying Checksum 2026-03-09T14:37:42.403 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: 324153f2810a: Download complete 2026-03-09T14:37:42.403 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: 455fd88e5221: Pull complete 2026-03-09T14:37:42.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:42 vm07 bash[22585]: audit 2026-03-09T14:37:41.436458+0000 mon.a (mon.0) 839 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:42.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:42 vm07 bash[22585]: cephadm 2026-03-09T14:37:41.443165+0000 mgr.x (mgr.24889) 54 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 
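At this point in the run cephadm is redeploying the monitoring daemons (node-exporter.a, node-exporter.b and prometheus.a) because their dependencies changed, and both hosts pull quay.io/prometheus/node-exporter:v1.7.0 layer by layer. A minimal sketch for sanity-checking such a redeploy while following the run is shown below; the hostnames vm07/vm11 and the 9100 metrics port are taken from the surrounding log entries, while everything else (where the commands are run, the use of curl) is an assumption rather than part of the test.

    # Sketch only: ad-hoc checks while cephadm redeploys the monitoring stack.
    # Assumes a shell with the ceph CLI and an admin keyring available.
    ceph orch ps | grep -E 'node-exporter|prometheus'   # daemon state as cephadm sees it

    # Confirm each node-exporter answers on its metrics port once redeployed.
    for host in vm07 vm11; do
        curl -sf "http://${host}:9100/metrics" | head -n 3 || echo "${host}: not up yet"
    done
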
2026-03-09T14:37:42.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:42 vm07 bash[22585]: audit 2026-03-09T14:37:41.443503+0000 mon.a (mon.0) 840 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:42.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:42 vm07 bash[22585]: audit 2026-03-09T14:37:41.545877+0000 mon.a (mon.0) 841 : audit [INF] from='client.? 192.168.123.107:0/3626361538' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/13006971"}]': finished 2026-03-09T14:37:42.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:42 vm07 bash[22585]: cluster 2026-03-09T14:37:41.545920+0000 mon.a (mon.0) 842 : cluster [DBG] osdmap e85: 8 total, 8 up, 8 in 2026-03-09T14:37:42.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:42 vm07 bash[22585]: cephadm 2026-03-09T14:37:41.603315+0000 mgr.x (mgr.24889) 55 : cephadm [INF] Deploying daemon prometheus.a on vm11 2026-03-09T14:37:42.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:42 vm07 bash[22585]: audit 2026-03-09T14:37:41.763728+0000 mon.c (mon.1) 77 : audit [INF] from='client.? 192.168.123.107:0/2093323636' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1768733302"}]: dispatch 2026-03-09T14:37:42.658 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:42 vm07 bash[22585]: audit 2026-03-09T14:37:41.764198+0000 mon.a (mon.0) 843 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1768733302"}]: dispatch 2026-03-09T14:37:42.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[17480]: audit 2026-03-09T14:37:41.436458+0000 mon.a (mon.0) 839 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:42.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[17480]: cephadm 2026-03-09T14:37:41.443165+0000 mgr.x (mgr.24889) 54 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-09T14:37:42.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[17480]: audit 2026-03-09T14:37:41.443503+0000 mon.a (mon.0) 840 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:42.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[17480]: audit 2026-03-09T14:37:41.545877+0000 mon.a (mon.0) 841 : audit [INF] from='client.? 192.168.123.107:0/3626361538' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/13006971"}]': finished 2026-03-09T14:37:42.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[17480]: cluster 2026-03-09T14:37:41.545920+0000 mon.a (mon.0) 842 : cluster [DBG] osdmap e85: 8 total, 8 up, 8 in 2026-03-09T14:37:42.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[17480]: cephadm 2026-03-09T14:37:41.603315+0000 mgr.x (mgr.24889) 55 : cephadm [INF] Deploying daemon prometheus.a on vm11 2026-03-09T14:37:42.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[17480]: audit 2026-03-09T14:37:41.763728+0000 mon.c (mon.1) 77 : audit [INF] from='client.? 
192.168.123.107:0/2093323636' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1768733302"}]: dispatch 2026-03-09T14:37:42.658 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[17480]: audit 2026-03-09T14:37:41.764198+0000 mon.a (mon.0) 843 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1768733302"}]: dispatch 2026-03-09T14:37:42.658 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:42 vm07 bash[17785]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:37:42] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-09T14:37:42.658 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: 324153f2810a: Pull complete 2026-03-09T14:37:42.658 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: Digest: sha256:4cb2b9019f1757be8482419002cb7afe028fdba35d47958829e4cfeaf6246d80 2026-03-09T14:37:42.658 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: Status: Downloaded newer image for quay.io/prometheus/node-exporter:v1.7.0 2026-03-09T14:37:42.658 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.563Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)" 2026-03-09T14:37:42.658 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.563Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)" 2026-03-09T14:37:42.658 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.563Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/) 2026-03-09T14:37:42.658 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.563Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ 2026-03-09T14:37:42.658 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$ 2026-03-09T14:37:42.658 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data 2026-03-09T14:37:42.658 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:110 level=info msg="Enabled collectors" 2026-03-09T14:37:42.658 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z 
caller=node_exporter.go:117 level=info collector=arp 2026-03-09T14:37:42.658 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=bcache 2026-03-09T14:37:42.658 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=bonding 2026-03-09T14:37:42.658 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=btrfs 2026-03-09T14:37:42.658 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=conntrack 2026-03-09T14:37:42.658 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=cpu 2026-03-09T14:37:42.658 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=cpufreq 2026-03-09T14:37:42.658 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=diskstats 2026-03-09T14:37:42.658 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=dmi 2026-03-09T14:37:42.658 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=edac 2026-03-09T14:37:42.658 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=entropy 2026-03-09T14:37:42.658 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=fibrechannel 2026-03-09T14:37:42.658 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=filefd 2026-03-09T14:37:42.658 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=filesystem 2026-03-09T14:37:42.659 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=hwmon 2026-03-09T14:37:42.659 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=infiniband 2026-03-09T14:37:42.659 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=ipvs 2026-03-09T14:37:42.659 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=loadavg 2026-03-09T14:37:42.659 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=mdadm 2026-03-09T14:37:42.659 
INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=meminfo 2026-03-09T14:37:42.659 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=netclass 2026-03-09T14:37:42.659 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=netdev 2026-03-09T14:37:42.659 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=netstat 2026-03-09T14:37:42.659 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=nfs 2026-03-09T14:37:42.659 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=nfsd 2026-03-09T14:37:42.659 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=nvme 2026-03-09T14:37:42.659 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=os 2026-03-09T14:37:42.659 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=powersupplyclass 2026-03-09T14:37:42.659 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=pressure 2026-03-09T14:37:42.659 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=rapl 2026-03-09T14:37:42.659 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=schedstat 2026-03-09T14:37:42.659 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=selinux 2026-03-09T14:37:42.659 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=sockstat 2026-03-09T14:37:42.659 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=softnet 2026-03-09T14:37:42.659 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=stat 2026-03-09T14:37:42.659 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=tapestats 2026-03-09T14:37:42.659 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=textfile 2026-03-09T14:37:42.659 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: 
ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=thermal_zone 2026-03-09T14:37:42.659 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=time 2026-03-09T14:37:42.659 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=udp_queues 2026-03-09T14:37:42.659 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=uname 2026-03-09T14:37:42.659 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=vmstat 2026-03-09T14:37:42.659 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=xfs 2026-03-09T14:37:42.659 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=node_exporter.go:117 level=info collector=zfs 2026-03-09T14:37:42.659 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100 2026-03-09T14:37:42.659 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:42 vm07 bash[51682]: ts=2026-03-09T14:37:42.564Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100 2026-03-09T14:37:42.754 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:42 vm11 bash[33974]: ts=2026-03-09T14:37:42.313Z caller=manager.go:609 level=warn component="rule manager" group=pools msg="Evaluating rule failed" rule="alert: CephPoolGrowthWarning\nexpr: (predict_linear(ceph_pool_percent_used[2d], 3600 * 24 * 5) * on(pool_id) group_right()\n ceph_pool_metadata) >= 95\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.9.2\n severity: warning\n type: ceph_default\nannotations:\n description: |\n Pool '{{ $labels.name }}' will be full in less than 5 days assuming the average fill-up rate of the past 48 hours.\n summary: Pool growth rate may soon exceed it's capacity\n" err="found duplicate series for the match group {pool_id=\"1\"} on the left hand-side of the operation: [{instance=\"192.168.123.111:9283\", job=\"ceph\", pool_id=\"1\"}, {instance=\"192.168.123.107:9283\", job=\"ceph\", pool_id=\"1\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:37:42.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:42 vm11 bash[17885]: audit 2026-03-09T14:37:41.436458+0000 mon.a (mon.0) 839 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:42.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:42 vm11 bash[17885]: cephadm 2026-03-09T14:37:41.443165+0000 mgr.x (mgr.24889) 54 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-09T14:37:42.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:42 vm11 bash[17885]: audit 2026-03-09T14:37:41.443503+0000 mon.a (mon.0) 840 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:42.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:42 vm11 bash[17885]: audit 2026-03-09T14:37:41.545877+0000 mon.a (mon.0) 841 : audit [INF] from='client.? 
192.168.123.107:0/3626361538' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/13006971"}]': finished 2026-03-09T14:37:42.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:42 vm11 bash[17885]: cluster 2026-03-09T14:37:41.545920+0000 mon.a (mon.0) 842 : cluster [DBG] osdmap e85: 8 total, 8 up, 8 in 2026-03-09T14:37:42.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:42 vm11 bash[17885]: cephadm 2026-03-09T14:37:41.603315+0000 mgr.x (mgr.24889) 55 : cephadm [INF] Deploying daemon prometheus.a on vm11 2026-03-09T14:37:42.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:42 vm11 bash[17885]: audit 2026-03-09T14:37:41.763728+0000 mon.c (mon.1) 77 : audit [INF] from='client.? 192.168.123.107:0/2093323636' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1768733302"}]: dispatch 2026-03-09T14:37:42.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:42 vm11 bash[17885]: audit 2026-03-09T14:37:41.764198+0000 mon.a (mon.0) 843 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1768733302"}]: dispatch 2026-03-09T14:37:43.195 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:42 vm11 bash[39660]: v1.7.0: Pulling from prometheus/node-exporter 2026-03-09T14:37:43.504 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:43 vm11 bash[39660]: 2abcce694348: Pulling fs layer 2026-03-09T14:37:43.504 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:43 vm11 bash[39660]: 455fd88e5221: Pulling fs layer 2026-03-09T14:37:43.504 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:43 vm11 bash[39660]: 324153f2810a: Pulling fs layer 2026-03-09T14:37:43.504 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:43 vm11 bash[37598]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:37:43] "GET /metrics HTTP/1.1" 200 37765 "" "Prometheus/2.33.4" 2026-03-09T14:37:43.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:43 vm07 bash[22585]: audit 2026-03-09T14:37:42.547575+0000 mon.a (mon.0) 844 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1768733302"}]': finished 2026-03-09T14:37:43.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:43 vm07 bash[22585]: cluster 2026-03-09T14:37:42.547653+0000 mon.a (mon.0) 845 : cluster [DBG] osdmap e86: 8 total, 8 up, 8 in 2026-03-09T14:37:43.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:43 vm07 bash[22585]: audit 2026-03-09T14:37:42.758225+0000 mon.a (mon.0) 846 : audit [INF] from='client.? 192.168.123.107:0/3174971374' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1282393936"}]: dispatch 2026-03-09T14:37:43.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:43 vm07 bash[22585]: cluster 2026-03-09T14:37:43.054466+0000 mgr.x (mgr.24889) 56 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:37:43.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:43 vm07 bash[17480]: audit 2026-03-09T14:37:42.547575+0000 mon.a (mon.0) 844 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1768733302"}]': finished 2026-03-09T14:37:43.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:43 vm07 bash[17480]: cluster 2026-03-09T14:37:42.547653+0000 mon.a (mon.0) 845 : cluster [DBG] osdmap e86: 8 total, 8 up, 8 in 2026-03-09T14:37:43.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:43 vm07 bash[17480]: audit 2026-03-09T14:37:42.758225+0000 mon.a (mon.0) 846 : audit [INF] from='client.? 192.168.123.107:0/3174971374' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1282393936"}]: dispatch 2026-03-09T14:37:43.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:43 vm07 bash[17480]: cluster 2026-03-09T14:37:43.054466+0000 mgr.x (mgr.24889) 56 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:37:43.917 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:43 vm11 bash[39660]: 455fd88e5221: Verifying Checksum 2026-03-09T14:37:43.917 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:43 vm11 bash[39660]: 455fd88e5221: Download complete 2026-03-09T14:37:43.917 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:43 vm11 bash[39660]: 2abcce694348: Verifying Checksum 2026-03-09T14:37:43.918 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:43 vm11 bash[39660]: 2abcce694348: Pull complete 2026-03-09T14:37:43.918 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:43 vm11 bash[17885]: audit 2026-03-09T14:37:42.547575+0000 mon.a (mon.0) 844 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1768733302"}]': finished 2026-03-09T14:37:43.918 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:43 vm11 bash[17885]: cluster 2026-03-09T14:37:42.547653+0000 mon.a (mon.0) 845 : cluster [DBG] osdmap e86: 8 total, 8 up, 8 in 2026-03-09T14:37:43.918 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:43 vm11 bash[17885]: audit 2026-03-09T14:37:42.758225+0000 mon.a (mon.0) 846 : audit [INF] from='client.? 
192.168.123.107:0/3174971374' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1282393936"}]: dispatch 2026-03-09T14:37:43.918 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:43 vm11 bash[17885]: cluster 2026-03-09T14:37:43.054466+0000 mgr.x (mgr.24889) 56 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:37:44.254 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:43 vm11 bash[39660]: 324153f2810a: Verifying Checksum 2026-03-09T14:37:44.254 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:43 vm11 bash[39660]: 324153f2810a: Download complete 2026-03-09T14:37:44.254 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:43 vm11 bash[39660]: 455fd88e5221: Pull complete 2026-03-09T14:37:44.254 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: 324153f2810a: Pull complete 2026-03-09T14:37:44.254 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: Digest: sha256:4cb2b9019f1757be8482419002cb7afe028fdba35d47958829e4cfeaf6246d80 2026-03-09T14:37:44.254 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: Status: Downloaded newer image for quay.io/prometheus/node-exporter:v1.7.0 2026-03-09T14:37:44.254 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.157Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)" 2026-03-09T14:37:44.254 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.157Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)" 2026-03-09T14:37:44.254 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.158Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/) 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.158Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.158Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$ 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:110 
level=info msg="Enabled collectors" 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=arp 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=bcache 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=bonding 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=btrfs 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=conntrack 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=cpu 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=cpufreq 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=diskstats 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=dmi 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=edac 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=entropy 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=fibrechannel 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=filefd 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=filesystem 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=hwmon 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=infiniband 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=ipvs 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=loadavg 2026-03-09T14:37:44.255 
INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=mdadm 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=meminfo 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=netclass 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=netdev 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=netstat 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=nfs 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=nfsd 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=nvme 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=os 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=powersupplyclass 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=pressure 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=rapl 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=schedstat 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=selinux 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=sockstat 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=softnet 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=stat 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=tapestats 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: 
ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=textfile 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=thermal_zone 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=time 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=udp_queues 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=uname 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=vmstat 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=xfs 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.159Z caller=node_exporter.go:117 level=info collector=zfs 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.160Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100 2026-03-09T14:37:44.255 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[39660]: ts=2026-03-09T14:37:44.160Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100 2026-03-09T14:37:44.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:44 vm07 bash[22585]: audit 2026-03-09T14:37:43.563417+0000 mon.a (mon.0) 847 : audit [INF] from='client.? 192.168.123.107:0/3174971374' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1282393936"}]': finished 2026-03-09T14:37:44.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:44 vm07 bash[22585]: cluster 2026-03-09T14:37:43.568760+0000 mon.a (mon.0) 848 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in 2026-03-09T14:37:44.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:44 vm07 bash[22585]: audit 2026-03-09T14:37:43.786300+0000 mon.a (mon.0) 849 : audit [INF] from='client.? 192.168.123.107:0/2634605423' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1561618492"}]: dispatch 2026-03-09T14:37:44.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:44 vm07 bash[17480]: audit 2026-03-09T14:37:43.563417+0000 mon.a (mon.0) 847 : audit [INF] from='client.? 192.168.123.107:0/3174971374' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1282393936"}]': finished 2026-03-09T14:37:44.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:44 vm07 bash[17480]: cluster 2026-03-09T14:37:43.568760+0000 mon.a (mon.0) 848 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in 2026-03-09T14:37:44.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:44 vm07 bash[17480]: audit 2026-03-09T14:37:43.786300+0000 mon.a (mon.0) 849 : audit [INF] from='client.? 
192.168.123.107:0/2634605423' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1561618492"}]: dispatch 2026-03-09T14:37:45.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[17885]: audit 2026-03-09T14:37:43.563417+0000 mon.a (mon.0) 847 : audit [INF] from='client.? 192.168.123.107:0/3174971374' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1282393936"}]': finished 2026-03-09T14:37:45.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[17885]: cluster 2026-03-09T14:37:43.568760+0000 mon.a (mon.0) 848 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in 2026-03-09T14:37:45.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:44 vm11 bash[17885]: audit 2026-03-09T14:37:43.786300+0000 mon.a (mon.0) 849 : audit [INF] from='client.? 192.168.123.107:0/2634605423' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1561618492"}]: dispatch 2026-03-09T14:37:45.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:45 vm07 bash[22585]: audit 2026-03-09T14:37:44.562901+0000 mon.a (mon.0) 850 : audit [INF] from='client.? 192.168.123.107:0/2634605423' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1561618492"}]': finished 2026-03-09T14:37:45.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:45 vm07 bash[22585]: cluster 2026-03-09T14:37:44.562932+0000 mon.a (mon.0) 851 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in 2026-03-09T14:37:45.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:45 vm07 bash[22585]: audit 2026-03-09T14:37:44.796282+0000 mon.c (mon.1) 78 : audit [INF] from='client.? 192.168.123.107:0/1879029849' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/1071847988"}]: dispatch 2026-03-09T14:37:45.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:45 vm07 bash[22585]: audit 2026-03-09T14:37:44.796686+0000 mon.a (mon.0) 852 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/1071847988"}]: dispatch 2026-03-09T14:37:45.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:45 vm07 bash[22585]: cluster 2026-03-09T14:37:45.054822+0000 mgr.x (mgr.24889) 57 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:37:45.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:45 vm07 bash[17480]: audit 2026-03-09T14:37:44.562901+0000 mon.a (mon.0) 850 : audit [INF] from='client.? 192.168.123.107:0/2634605423' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1561618492"}]': finished 2026-03-09T14:37:45.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:45 vm07 bash[17480]: cluster 2026-03-09T14:37:44.562932+0000 mon.a (mon.0) 851 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in 2026-03-09T14:37:45.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:45 vm07 bash[17480]: audit 2026-03-09T14:37:44.796282+0000 mon.c (mon.1) 78 : audit [INF] from='client.? 
192.168.123.107:0/1879029849' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/1071847988"}]: dispatch 2026-03-09T14:37:45.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:45 vm07 bash[17480]: audit 2026-03-09T14:37:44.796686+0000 mon.a (mon.0) 852 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/1071847988"}]: dispatch 2026-03-09T14:37:45.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:45 vm07 bash[17480]: cluster 2026-03-09T14:37:45.054822+0000 mgr.x (mgr.24889) 57 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:37:46.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:45 vm11 bash[17885]: audit 2026-03-09T14:37:44.562901+0000 mon.a (mon.0) 850 : audit [INF] from='client.? 192.168.123.107:0/2634605423' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1561618492"}]': finished 2026-03-09T14:37:46.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:45 vm11 bash[17885]: cluster 2026-03-09T14:37:44.562932+0000 mon.a (mon.0) 851 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in 2026-03-09T14:37:46.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:45 vm11 bash[17885]: audit 2026-03-09T14:37:44.796282+0000 mon.c (mon.1) 78 : audit [INF] from='client.? 192.168.123.107:0/1879029849' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/1071847988"}]: dispatch 2026-03-09T14:37:46.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:45 vm11 bash[17885]: audit 2026-03-09T14:37:44.796686+0000 mon.a (mon.0) 852 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/1071847988"}]: dispatch 2026-03-09T14:37:46.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:45 vm11 bash[17885]: cluster 2026-03-09T14:37:45.054822+0000 mgr.x (mgr.24889) 57 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 98 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:37:46.871 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:46 vm11 bash[17885]: audit 2026-03-09T14:37:45.577915+0000 mon.a (mon.0) 853 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/1071847988"}]': finished 2026-03-09T14:37:46.871 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:46 vm11 bash[17885]: cluster 2026-03-09T14:37:45.577957+0000 mon.a (mon.0) 854 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-09T14:37:46.871 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:46 vm11 bash[17885]: audit 2026-03-09T14:37:45.798067+0000 mon.a (mon.0) 855 : audit [INF] from='client.? 192.168.123.107:0/286272274' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/1071847988"}]: dispatch 2026-03-09T14:37:46.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:46 vm07 bash[22585]: audit 2026-03-09T14:37:45.577915+0000 mon.a (mon.0) 853 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/1071847988"}]': finished 2026-03-09T14:37:46.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:46 vm07 bash[22585]: cluster 2026-03-09T14:37:45.577957+0000 mon.a (mon.0) 854 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-09T14:37:46.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:46 vm07 bash[22585]: audit 2026-03-09T14:37:45.798067+0000 mon.a (mon.0) 855 : audit [INF] from='client.? 192.168.123.107:0/286272274' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/1071847988"}]: dispatch 2026-03-09T14:37:46.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:46 vm07 bash[17480]: audit 2026-03-09T14:37:45.577915+0000 mon.a (mon.0) 853 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/1071847988"}]': finished 2026-03-09T14:37:46.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:46 vm07 bash[17480]: cluster 2026-03-09T14:37:45.577957+0000 mon.a (mon.0) 854 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-09T14:37:46.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:46 vm07 bash[17480]: audit 2026-03-09T14:37:45.798067+0000 mon.a (mon.0) 855 : audit [INF] from='client.? 192.168.123.107:0/286272274' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/1071847988"}]: dispatch 2026-03-09T14:37:47.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:47 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:47.504 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:47 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:47.504 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:37:47 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:47.504 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:47 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
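A few entries above, prometheus.a reports an evaluation failure for the CephPoolGrowthWarning rule: both mgr exporters (192.168.123.107:9283 and 192.168.123.111:9283) expose ceph_pool_percent_used and ceph_pool_metadata for pool_id="1", so the on(pool_id) join in the rule finds duplicate series on one side and rejects the many-to-many match. The sketch below reproduces the duplication and shows one way to collapse it for an ad-hoc query; the vm11:9095 endpoint is an assumption (cephadm's usual Prometheus port is not visible in this log), and aggregating with max by (pool_id) is only an illustration, not the shipped rule's fix.

    # Sketch only, assuming prometheus.a listens on vm11:9095.
    PROM="http://vm11:9095"

    # Two mgr exporters -> two series per pool_id, which is what breaks the join.
    curl -sG "${PROM}/api/v1/query" \
         --data-urlencode 'query=count by (pool_id) (ceph_pool_metadata)'

    # Collapsing one side to a single series per pool_id keeps the join one-to-many.
    curl -sG "${PROM}/api/v1/query" \
         --data-urlencode 'query=max by (pool_id) (ceph_pool_percent_used) * on(pool_id) group_right() ceph_pool_metadata'
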
2026-03-09T14:37:47.504 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:47.504 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 systemd[1]: Stopping Ceph prometheus.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040... 2026-03-09T14:37:47.504 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 bash[33974]: ts=2026-03-09T14:37:47.331Z caller=main.go:775 level=warn msg="Received SIGTERM, exiting gracefully..." 2026-03-09T14:37:47.504 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 bash[33974]: ts=2026-03-09T14:37:47.331Z caller=main.go:798 level=info msg="Stopping scrape discovery manager..." 2026-03-09T14:37:47.504 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 bash[33974]: ts=2026-03-09T14:37:47.331Z caller=main.go:812 level=info msg="Stopping notify discovery manager..." 2026-03-09T14:37:47.504 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 bash[33974]: ts=2026-03-09T14:37:47.331Z caller=main.go:834 level=info msg="Stopping scrape manager..." 2026-03-09T14:37:47.504 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 bash[33974]: ts=2026-03-09T14:37:47.331Z caller=main.go:794 level=info msg="Scrape discovery manager stopped" 2026-03-09T14:37:47.504 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 bash[33974]: ts=2026-03-09T14:37:47.331Z caller=main.go:808 level=info msg="Notify discovery manager stopped" 2026-03-09T14:37:47.504 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 bash[33974]: ts=2026-03-09T14:37:47.331Z caller=manager.go:945 level=info component="rule manager" msg="Stopping rule manager..." 2026-03-09T14:37:47.504 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 bash[33974]: ts=2026-03-09T14:37:47.331Z caller=manager.go:955 level=info component="rule manager" msg="Rule manager stopped" 2026-03-09T14:37:47.504 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 bash[33974]: ts=2026-03-09T14:37:47.331Z caller=main.go:828 level=info msg="Scrape manager stopped" 2026-03-09T14:37:47.504 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 bash[33974]: ts=2026-03-09T14:37:47.332Z caller=notifier.go:600 level=info component=notifier msg="Stopping notification manager..." 2026-03-09T14:37:47.504 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 bash[33974]: ts=2026-03-09T14:37:47.332Z caller=main.go:1054 level=info msg="Notifier manager stopped" 2026-03-09T14:37:47.504 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 bash[33974]: ts=2026-03-09T14:37:47.332Z caller=main.go:1066 level=info msg="See you next time!" 2026-03-09T14:37:47.504 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 bash[39995]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-prometheus-a 2026-03-09T14:37:47.504 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 systemd[1]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@prometheus.a.service: Deactivated successfully. 
2026-03-09T14:37:47.504 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 systemd[1]: Stopped Ceph prometheus.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:37:47.505 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:47 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:47.505 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:37:47 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:47.505 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:37:47 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:47.505 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:37:47 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:47.785 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:47 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:47.785 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:47 vm11 bash[37598]: [09/Mar/2026:14:37:47] ENGINE Bus STOPPING 2026-03-09T14:37:47.785 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:37:47 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:47.785 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:37:47 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
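Every daemon operation on vm11 triggers the same systemd complaint about KillMode=none in the cephadm-generated unit template; it is a deprecation warning rather than a failure, and the adjacent entries show prometheus.a being stopped and restarted cleanly through that unit. To confirm what the generated template actually sets, something like the sketch below can be run on the host; the fsid is copied from the unit name in the log, and only standard systemctl and grep invocations are involved.

    # Sketch only: inspect the cephadm-generated unit template systemd warns about.
    FSID="f59f9828-1bc3-11f1-bfd8-7b3d0c866040"   # fsid as it appears in the unit name above
    systemctl cat "ceph-${FSID}@.service" | grep -n -i 'KillMode'
    systemctl status "ceph-${FSID}@prometheus.a.service" --no-pager | head -n 5
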
2026-03-09T14:37:47.785 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:47 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:47.785 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:47 vm11 bash[17885]: audit 2026-03-09T14:37:46.584779+0000 mon.a (mon.0) 856 : audit [INF] from='client.? 192.168.123.107:0/286272274' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/1071847988"}]': finished 2026-03-09T14:37:47.785 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:47 vm11 bash[17885]: cluster 2026-03-09T14:37:46.584819+0000 mon.a (mon.0) 857 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-09T14:37:47.785 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:47 vm11 bash[17885]: cluster 2026-03-09T14:37:47.055221+0000 mgr.x (mgr.24889) 58 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 99 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:37:47.785 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:47.785 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 systemd[1]: Started Ceph prometheus.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:37:47.785 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 bash[40106]: ts=2026-03-09T14:37:47.784Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)" 2026-03-09T14:37:47.785 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 bash[40106]: ts=2026-03-09T14:37:47.785Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)" 2026-03-09T14:37:47.785 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 bash[40106]: ts=2026-03-09T14:37:47.785Z caller=main.go:623 level=info host_details="(Linux 5.15.0-1092-kvm #97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026 x86_64 vm11 (none))" 2026-03-09T14:37:47.785 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 bash[40106]: ts=2026-03-09T14:37:47.785Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" 2026-03-09T14:37:47.785 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 bash[40106]: ts=2026-03-09T14:37:47.786Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" 2026-03-09T14:37:47.785 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:37:47 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:47.785 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:37:47 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:47.785 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:37:47 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:47.785 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:37:47 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:47.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:47 vm07 bash[22585]: audit 2026-03-09T14:37:46.584779+0000 mon.a (mon.0) 856 : audit [INF] from='client.? 192.168.123.107:0/286272274' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/1071847988"}]': finished 2026-03-09T14:37:47.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:47 vm07 bash[22585]: cluster 2026-03-09T14:37:46.584819+0000 mon.a (mon.0) 857 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-09T14:37:47.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:47 vm07 bash[22585]: cluster 2026-03-09T14:37:47.055221+0000 mgr.x (mgr.24889) 58 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 99 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:37:47.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:47 vm07 bash[17480]: audit 2026-03-09T14:37:46.584779+0000 mon.a (mon.0) 856 : audit [INF] from='client.? 
192.168.123.107:0/286272274' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/1071847988"}]': finished 2026-03-09T14:37:47.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:47 vm07 bash[17480]: cluster 2026-03-09T14:37:46.584819+0000 mon.a (mon.0) 857 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-09T14:37:47.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:47 vm07 bash[17480]: cluster 2026-03-09T14:37:47.055221+0000 mgr.x (mgr.24889) 58 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 99 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:37:48.254 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 bash[40106]: ts=2026-03-09T14:37:47.788Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=:9095 2026-03-09T14:37:48.254 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 bash[40106]: ts=2026-03-09T14:37:47.789Z caller=main.go:1129 level=info msg="Starting TSDB ..." 2026-03-09T14:37:48.254 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 bash[40106]: ts=2026-03-09T14:37:47.789Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9095 2026-03-09T14:37:48.254 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 bash[40106]: ts=2026-03-09T14:37:47.789Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9095 2026-03-09T14:37:48.254 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 bash[40106]: ts=2026-03-09T14:37:47.791Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 2026-03-09T14:37:48.254 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 bash[40106]: ts=2026-03-09T14:37:47.791Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.132µs 2026-03-09T14:37:48.254 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 bash[40106]: ts=2026-03-09T14:37:47.791Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" 2026-03-09T14:37:48.254 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 bash[40106]: ts=2026-03-09T14:37:47.802Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=2 2026-03-09T14:37:48.254 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 bash[40106]: ts=2026-03-09T14:37:47.815Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=1 maxSegment=2 2026-03-09T14:37:48.254 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 bash[40106]: ts=2026-03-09T14:37:47.816Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=2 maxSegment=2 2026-03-09T14:37:48.254 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 bash[40106]: ts=2026-03-09T14:37:47.816Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=134.292µs wal_replay_duration=24.792686ms wbl_replay_duration=120ns total_replay_duration=24.937729ms 2026-03-09T14:37:48.254 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 bash[40106]: ts=2026-03-09T14:37:47.820Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC 2026-03-09T14:37:48.254 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 bash[40106]: 
ts=2026-03-09T14:37:47.820Z caller=main.go:1153 level=info msg="TSDB started" 2026-03-09T14:37:48.254 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 bash[40106]: ts=2026-03-09T14:37:47.820Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 2026-03-09T14:37:48.254 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 bash[40106]: ts=2026-03-09T14:37:47.830Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=9.756455ms db_storage=1.092µs remote_storage=1.073µs web_handler=762ns query_engine=1.093µs scrape=861.871µs scrape_sd=144.502µs notify=10.81µs notify_sd=12.514µs rules=8.401208ms tracing=8.556µs 2026-03-09T14:37:48.254 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 bash[40106]: ts=2026-03-09T14:37:47.830Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 2026-03-09T14:37:48.254 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:47 vm11 bash[40106]: ts=2026-03-09T14:37:47.830Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." 2026-03-09T14:37:48.254 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:47 vm11 bash[37598]: [09/Mar/2026:14:37:47] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T14:37:48.254 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:47 vm11 bash[37598]: [09/Mar/2026:14:37:47] ENGINE Bus STOPPED 2026-03-09T14:37:48.254 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:47 vm11 bash[37598]: [09/Mar/2026:14:37:47] ENGINE Bus STARTING 2026-03-09T14:37:48.254 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:47 vm11 bash[37598]: [09/Mar/2026:14:37:47] ENGINE Serving on http://:::9283 2026-03-09T14:37:48.254 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:47 vm11 bash[37598]: [09/Mar/2026:14:37:47] ENGINE Bus STARTED 2026-03-09T14:37:48.254 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:47 vm11 bash[37598]: [09/Mar/2026:14:37:47] ENGINE Bus STOPPING 2026-03-09T14:37:48.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:48 vm11 bash[37598]: [09/Mar/2026:14:37:48] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T14:37:48.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:48 vm11 bash[37598]: [09/Mar/2026:14:37:48] ENGINE Bus STOPPED 2026-03-09T14:37:48.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:48 vm11 bash[37598]: [09/Mar/2026:14:37:48] ENGINE Bus STARTING 2026-03-09T14:37:48.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:48 vm11 bash[37598]: [09/Mar/2026:14:37:48] ENGINE Serving on http://:::9283 2026-03-09T14:37:48.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:48 vm11 bash[37598]: [09/Mar/2026:14:37:48] ENGINE Bus STARTED 2026-03-09T14:37:48.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:48 vm11 bash[37598]: [09/Mar/2026:14:37:48] ENGINE Bus STOPPING 2026-03-09T14:37:48.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:48 vm11 bash[37598]: [09/Mar/2026:14:37:48] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T14:37:48.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:48 vm11 bash[37598]: [09/Mar/2026:14:37:48] ENGINE Bus STOPPED 2026-03-09T14:37:48.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:48 vm11 bash[37598]: [09/Mar/2026:14:37:48] ENGINE Bus STARTING 2026-03-09T14:37:48.754 
INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:48 vm11 bash[37598]: [09/Mar/2026:14:37:48] ENGINE Serving on http://:::9283 2026-03-09T14:37:48.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:48 vm11 bash[37598]: [09/Mar/2026:14:37:48] ENGINE Bus STARTED 2026-03-09T14:37:48.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:48 vm11 bash[17885]: audit 2026-03-09T14:37:47.663131+0000 mon.a (mon.0) 858 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:48.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:48 vm11 bash[17885]: audit 2026-03-09T14:37:47.672441+0000 mon.a (mon.0) 859 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:48.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:48 vm11 bash[17885]: audit 2026-03-09T14:37:47.677181+0000 mgr.x (mgr.24889) 59 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T14:37:48.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:48 vm11 bash[17885]: audit 2026-03-09T14:37:47.679670+0000 mon.c (mon.1) 79 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T14:37:48.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:48 vm11 bash[17885]: audit 2026-03-09T14:37:47.681715+0000 mgr.x (mgr.24889) 60 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm07.local:9093"}]: dispatch 2026-03-09T14:37:48.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:48 vm11 bash[17885]: audit 2026-03-09T14:37:47.684302+0000 mon.c (mon.1) 80 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm07.local:9093"}]: dispatch 2026-03-09T14:37:48.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:48 vm11 bash[17885]: audit 2026-03-09T14:37:47.693198+0000 mon.a (mon.0) 860 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:48.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:48 vm11 bash[17885]: audit 2026-03-09T14:37:47.704450+0000 mgr.x (mgr.24889) 61 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T14:37:48.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:48 vm11 bash[17885]: audit 2026-03-09T14:37:47.706589+0000 mon.c (mon.1) 81 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T14:37:48.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:48 vm11 bash[17885]: audit 2026-03-09T14:37:47.715658+0000 mon.a (mon.0) 861 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:48.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:48 vm11 bash[17885]: cephadm 2026-03-09T14:37:47.717208+0000 mgr.x (mgr.24889) 62 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.107:5000 to Dashboard 2026-03-09T14:37:48.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:48 vm11 bash[17885]: audit 2026-03-09T14:37:47.717851+0000 mgr.x (mgr.24889) 63 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T14:37:48.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:48 vm11 bash[17885]: audit 2026-03-09T14:37:47.720364+0000 mon.c (mon.1) 82 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T14:37:48.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:48 vm11 bash[17885]: audit 2026-03-09T14:37:47.720945+0000 mgr.x (mgr.24889) 64 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm07"}]: dispatch 2026-03-09T14:37:48.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:48 vm11 bash[17885]: audit 2026-03-09T14:37:47.723389+0000 mon.c (mon.1) 83 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm07"}]: dispatch 2026-03-09T14:37:48.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:48 vm11 bash[17885]: audit 2026-03-09T14:37:47.730363+0000 mon.a (mon.0) 862 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:48.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:48 vm11 bash[17885]: audit 2026-03-09T14:37:47.732341+0000 mgr.x (mgr.24889) 65 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T14:37:48.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:48 vm11 bash[17885]: audit 2026-03-09T14:37:47.733387+0000 mgr.x (mgr.24889) 66 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm11.local:3000"}]: dispatch 2026-03-09T14:37:48.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:48 vm11 bash[17885]: audit 2026-03-09T14:37:47.734828+0000 mon.c (mon.1) 84 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T14:37:48.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:48 vm11 bash[17885]: audit 2026-03-09T14:37:47.736093+0000 mon.c (mon.1) 85 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm11.local:3000"}]: dispatch 2026-03-09T14:37:48.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:48 vm11 bash[17885]: audit 2026-03-09T14:37:47.742335+0000 mon.a (mon.0) 863 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:48.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:48 vm11 bash[17885]: audit 2026-03-09T14:37:47.749766+0000 mgr.x (mgr.24889) 67 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T14:37:48.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:48 vm11 bash[17885]: audit 2026-03-09T14:37:47.751374+0000 mon.c (mon.1) 86 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T14:37:48.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:48 vm11 bash[17885]: audit 2026-03-09T14:37:47.752112+0000 mgr.x (mgr.24889) 68 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm11.local:9095"}]: dispatch 2026-03-09T14:37:48.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:48 vm11 bash[17885]: audit 2026-03-09T14:37:47.753955+0000 mon.c (mon.1) 87 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm11.local:9095"}]: dispatch 2026-03-09T14:37:48.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:48 vm11 bash[17885]: audit 2026-03-09T14:37:47.759600+0000 mon.a (mon.0) 864 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:48.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:48 vm11 bash[17885]: audit 2026-03-09T14:37:47.812799+0000 mon.c (mon.1) 88 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:37:48.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:48 vm11 bash[17885]: cephadm 2026-03-09T14:37:47.813206+0000 mgr.x (mgr.24889) 69 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.x) 2026-03-09T14:37:48.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:48 vm11 bash[17885]: cephadm 2026-03-09T14:37:47.813588+0000 mgr.x (mgr.24889) 70 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.x) 2026-03-09T14:37:48.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:48 vm11 bash[17885]: audit 2026-03-09T14:37:48.293991+0000 mon.a (mon.0) 865 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:48.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:48 vm11 bash[17885]: audit 2026-03-09T14:37:48.297816+0000 mon.c (mon.1) 89 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T14:37:48.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:48 vm11 bash[17885]: audit 2026-03-09T14:37:48.298143+0000 mon.a (mon.0) 866 : audit [INF] from='mgr.24889 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T14:37:48.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:48 vm11 bash[17885]: audit 2026-03-09T14:37:48.298765+0000 mon.c (mon.1) 90 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T14:37:48.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:48 vm11 bash[17885]: audit 2026-03-09T14:37:48.299337+0000 mon.c (mon.1) 91 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:37:48.777 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:48 vm07 bash[17480]: audit 2026-03-09T14:37:47.663131+0000 mon.a (mon.0) 858 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:48.777 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:48 vm07 bash[17480]: audit 2026-03-09T14:37:47.672441+0000 mon.a (mon.0) 859 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:48 vm07 bash[17480]: audit 2026-03-09T14:37:47.677181+0000 mgr.x (mgr.24889) 59 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:48 vm07 bash[17480]: audit 2026-03-09T14:37:47.679670+0000 mon.c (mon.1) 79 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:48 vm07 bash[17480]: audit 2026-03-09T14:37:47.681715+0000 mgr.x (mgr.24889) 60 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm07.local:9093"}]: dispatch 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:48 vm07 bash[17480]: audit 2026-03-09T14:37:47.684302+0000 mon.c (mon.1) 80 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm07.local:9093"}]: dispatch 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:48 vm07 bash[17480]: audit 2026-03-09T14:37:47.693198+0000 mon.a (mon.0) 860 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:48 vm07 bash[17480]: audit 2026-03-09T14:37:47.704450+0000 mgr.x (mgr.24889) 61 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:48 vm07 bash[17480]: audit 2026-03-09T14:37:47.706589+0000 mon.c (mon.1) 81 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:48 vm07 bash[17480]: audit 2026-03-09T14:37:47.715658+0000 mon.a (mon.0) 861 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:48 vm07 bash[17480]: cephadm 2026-03-09T14:37:47.717208+0000 mgr.x (mgr.24889) 62 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.107:5000 to Dashboard 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:48 vm07 bash[17480]: audit 2026-03-09T14:37:47.717851+0000 mgr.x (mgr.24889) 63 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:48 vm07 bash[17480]: audit 2026-03-09T14:37:47.720364+0000 mon.c (mon.1) 82 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:48 vm07 bash[17480]: audit 2026-03-09T14:37:47.720945+0000 mgr.x (mgr.24889) 64 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm07"}]: dispatch 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:48 vm07 bash[17480]: audit 2026-03-09T14:37:47.723389+0000 mon.c (mon.1) 83 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm07"}]: dispatch 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:48 vm07 bash[17480]: audit 2026-03-09T14:37:47.730363+0000 mon.a (mon.0) 862 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:48 vm07 bash[17480]: audit 2026-03-09T14:37:47.732341+0000 mgr.x (mgr.24889) 65 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:48 vm07 bash[17480]: audit 2026-03-09T14:37:47.733387+0000 mgr.x (mgr.24889) 66 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm11.local:3000"}]: dispatch 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:48 vm07 bash[17480]: audit 2026-03-09T14:37:47.734828+0000 mon.c (mon.1) 84 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:48 vm07 bash[17480]: audit 2026-03-09T14:37:47.736093+0000 mon.c (mon.1) 85 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm11.local:3000"}]: dispatch 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:48 vm07 bash[22585]: audit 2026-03-09T14:37:47.663131+0000 mon.a (mon.0) 858 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:48 vm07 bash[22585]: audit 2026-03-09T14:37:47.672441+0000 mon.a (mon.0) 859 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:48 vm07 bash[22585]: audit 2026-03-09T14:37:47.677181+0000 mgr.x (mgr.24889) 59 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:48 vm07 bash[22585]: audit 2026-03-09T14:37:47.679670+0000 mon.c (mon.1) 79 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:48 vm07 bash[22585]: audit 2026-03-09T14:37:47.681715+0000 mgr.x (mgr.24889) 60 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm07.local:9093"}]: dispatch 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:48 vm07 bash[22585]: audit 2026-03-09T14:37:47.684302+0000 mon.c (mon.1) 80 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm07.local:9093"}]: dispatch 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:48 vm07 bash[22585]: audit 2026-03-09T14:37:47.693198+0000 mon.a (mon.0) 860 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:48 vm07 bash[22585]: audit 2026-03-09T14:37:47.704450+0000 mgr.x (mgr.24889) 61 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:48 vm07 bash[22585]: audit 2026-03-09T14:37:47.706589+0000 mon.c (mon.1) 81 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:48 vm07 bash[22585]: audit 2026-03-09T14:37:47.715658+0000 mon.a (mon.0) 861 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:48 vm07 bash[22585]: cephadm 2026-03-09T14:37:47.717208+0000 mgr.x (mgr.24889) 62 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.107:5000 to Dashboard 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:48 vm07 bash[22585]: audit 2026-03-09T14:37:47.717851+0000 mgr.x (mgr.24889) 63 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:48 vm07 bash[22585]: audit 2026-03-09T14:37:47.720364+0000 mon.c (mon.1) 82 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:48 vm07 bash[22585]: audit 2026-03-09T14:37:47.720945+0000 mgr.x (mgr.24889) 64 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm07"}]: dispatch 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:48 vm07 bash[22585]: audit 2026-03-09T14:37:47.723389+0000 mon.c (mon.1) 83 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm07"}]: dispatch 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:48 vm07 bash[22585]: audit 2026-03-09T14:37:47.730363+0000 mon.a (mon.0) 862 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:48 vm07 bash[22585]: audit 2026-03-09T14:37:47.732341+0000 mgr.x (mgr.24889) 65 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:48 vm07 bash[22585]: audit 2026-03-09T14:37:47.733387+0000 mgr.x (mgr.24889) 66 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm11.local:3000"}]: dispatch 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:48 vm07 bash[22585]: audit 2026-03-09T14:37:47.734828+0000 mon.c (mon.1) 84 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:48 vm07 bash[22585]: audit 2026-03-09T14:37:47.736093+0000 mon.c (mon.1) 85 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm11.local:3000"}]: dispatch 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:48 vm07 bash[22585]: audit 2026-03-09T14:37:47.742335+0000 mon.a (mon.0) 863 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:48 vm07 bash[22585]: audit 2026-03-09T14:37:47.749766+0000 mgr.x (mgr.24889) 67 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:48 vm07 bash[22585]: audit 2026-03-09T14:37:47.751374+0000 mon.c (mon.1) 86 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:48 vm07 bash[22585]: audit 2026-03-09T14:37:47.752112+0000 mgr.x (mgr.24889) 68 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm11.local:9095"}]: dispatch 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:48 vm07 bash[22585]: audit 2026-03-09T14:37:47.753955+0000 mon.c (mon.1) 87 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm11.local:9095"}]: dispatch 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:48 vm07 bash[22585]: audit 2026-03-09T14:37:47.759600+0000 mon.a (mon.0) 864 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:48 vm07 bash[22585]: audit 2026-03-09T14:37:47.812799+0000 mon.c (mon.1) 88 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:48 vm07 bash[22585]: cephadm 2026-03-09T14:37:47.813206+0000 mgr.x (mgr.24889) 69 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.x) 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:48 vm07 bash[22585]: cephadm 2026-03-09T14:37:47.813588+0000 mgr.x (mgr.24889) 70 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.x) 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:48 vm07 bash[22585]: audit 2026-03-09T14:37:48.293991+0000 mon.a (mon.0) 865 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:48 vm07 bash[22585]: audit 2026-03-09T14:37:48.297816+0000 mon.c (mon.1) 89 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: 
dispatch 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:48 vm07 bash[22585]: audit 2026-03-09T14:37:48.298143+0000 mon.a (mon.0) 866 : audit [INF] from='mgr.24889 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:48 vm07 bash[22585]: audit 2026-03-09T14:37:48.298765+0000 mon.c (mon.1) 90 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T14:37:48.778 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:48 vm07 bash[22585]: audit 2026-03-09T14:37:48.299337+0000 mon.c (mon.1) 91 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:37:48.779 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:37:48 vm07 bash[51060]: ts=2026-03-09T14:37:48.466Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.002552281s 2026-03-09T14:37:48.779 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:48 vm07 bash[17480]: audit 2026-03-09T14:37:47.742335+0000 mon.a (mon.0) 863 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:48.779 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:48 vm07 bash[17480]: audit 2026-03-09T14:37:47.749766+0000 mgr.x (mgr.24889) 67 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T14:37:48.779 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:48 vm07 bash[17480]: audit 2026-03-09T14:37:47.751374+0000 mon.c (mon.1) 86 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T14:37:48.779 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:48 vm07 bash[17480]: audit 2026-03-09T14:37:47.752112+0000 mgr.x (mgr.24889) 68 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm11.local:9095"}]: dispatch 2026-03-09T14:37:48.779 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:48 vm07 bash[17480]: audit 2026-03-09T14:37:47.753955+0000 mon.c (mon.1) 87 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm11.local:9095"}]: dispatch 2026-03-09T14:37:48.779 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:48 vm07 bash[17480]: audit 2026-03-09T14:37:47.759600+0000 mon.a (mon.0) 864 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:48.779 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:48 vm07 bash[17480]: audit 2026-03-09T14:37:47.812799+0000 mon.c (mon.1) 88 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:37:48.779 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:48 vm07 bash[17480]: cephadm 2026-03-09T14:37:47.813206+0000 mgr.x (mgr.24889) 69 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.x) 2026-03-09T14:37:48.779 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:48 vm07 bash[17480]: cephadm 2026-03-09T14:37:47.813588+0000 mgr.x (mgr.24889) 70 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.x) 2026-03-09T14:37:48.779 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:48 vm07 bash[17480]: audit 2026-03-09T14:37:48.293991+0000 mon.a (mon.0) 865 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:48.779 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:48 vm07 bash[17480]: audit 2026-03-09T14:37:48.297816+0000 mon.c (mon.1) 89 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T14:37:48.779 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:48 vm07 bash[17480]: audit 2026-03-09T14:37:48.298143+0000 mon.a (mon.0) 866 : audit [INF] from='mgr.24889 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T14:37:48.779 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:48 vm07 bash[17480]: audit 2026-03-09T14:37:48.298765+0000 mon.c (mon.1) 90 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T14:37:48.779 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:48 vm07 bash[17480]: audit 2026-03-09T14:37:48.299337+0000 mon.c (mon.1) 91 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:37:49.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:48 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:49.155 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:48 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:49.155 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:48 vm07 systemd[1]: Stopping Ceph mgr.y for f59f9828-1bc3-11f1-bfd8-7b3d0c866040... 2026-03-09T14:37:49.155 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:48 vm07 bash[52102]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-mgr-y 2026-03-09T14:37:49.155 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:48 vm07 systemd[1]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@mgr.y.service: Main process exited, code=exited, status=143/n/a 2026-03-09T14:37:49.155 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:49 vm07 systemd[1]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@mgr.y.service: Failed with result 'exit-code'. 2026-03-09T14:37:49.155 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:49 vm07 systemd[1]: Stopped Ceph mgr.y for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:37:49.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:48 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:49.155 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:37:48 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:49.155 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:37:48 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:49.155 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:37:48 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:49.155 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:37:48 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:49.155 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:48 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:49.155 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:37:48 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:49.407 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:37:49 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:49.407 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:37:49 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:49.407 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:49 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:49.407 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:49 vm07 systemd[1]: Started Ceph mgr.y for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:37:49.407 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:37:49 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:49.407 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:37:49 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:49.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:49 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:49.407 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:37:49 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:49.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:49 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:49.408 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:37:49 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:37:49.907 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:49 vm07 bash[52213]: debug 2026-03-09T14:37:49.479+0000 7fde53247140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T14:37:49.907 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:49 vm07 bash[52213]: debug 2026-03-09T14:37:49.519+0000 7fde53247140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T14:37:49.907 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:49 vm07 bash[52213]: debug 2026-03-09T14:37:49.651+0000 7fde53247140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T14:37:49.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:49 vm07 bash[22585]: cephadm 2026-03-09T14:37:48.285045+0000 mgr.x (mgr.24889) 71 : cephadm [INF] Upgrade: Updating mgr.y 2026-03-09T14:37:49.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:49 vm07 bash[22585]: cephadm 2026-03-09T14:37:48.296952+0000 mgr.x (mgr.24889) 72 : cephadm [INF] Deploying daemon mgr.y on vm07 2026-03-09T14:37:49.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:49 vm07 bash[22585]: cluster 2026-03-09T14:37:49.055546+0000 mgr.x (mgr.24889) 73 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 99 MiB used, 160 GiB / 160 GiB avail; 932 B/s rd, 0 op/s 2026-03-09T14:37:49.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:49 vm07 bash[22585]: audit 2026-03-09T14:37:49.287998+0000 mon.a (mon.0) 867 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:49.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:49 vm07 bash[22585]: audit 2026-03-09T14:37:49.294313+0000 mon.a (mon.0) 868 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:49.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:49 vm07 bash[22585]: audit 2026-03-09T14:37:49.590971+0000 mon.c (mon.1) 92 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:37:49.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:49 vm07 bash[17480]: cephadm 
2026-03-09T14:37:48.285045+0000 mgr.x (mgr.24889) 71 : cephadm [INF] Upgrade: Updating mgr.y 2026-03-09T14:37:49.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:49 vm07 bash[17480]: cephadm 2026-03-09T14:37:48.296952+0000 mgr.x (mgr.24889) 72 : cephadm [INF] Deploying daemon mgr.y on vm07 2026-03-09T14:37:49.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:49 vm07 bash[17480]: cluster 2026-03-09T14:37:49.055546+0000 mgr.x (mgr.24889) 73 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 99 MiB used, 160 GiB / 160 GiB avail; 932 B/s rd, 0 op/s 2026-03-09T14:37:49.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:49 vm07 bash[17480]: audit 2026-03-09T14:37:49.287998+0000 mon.a (mon.0) 867 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:49.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:49 vm07 bash[17480]: audit 2026-03-09T14:37:49.294313+0000 mon.a (mon.0) 868 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:49.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:49 vm07 bash[17480]: audit 2026-03-09T14:37:49.590971+0000 mon.c (mon.1) 92 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:37:50.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:49 vm11 bash[17885]: cephadm 2026-03-09T14:37:48.285045+0000 mgr.x (mgr.24889) 71 : cephadm [INF] Upgrade: Updating mgr.y 2026-03-09T14:37:50.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:49 vm11 bash[17885]: cephadm 2026-03-09T14:37:48.296952+0000 mgr.x (mgr.24889) 72 : cephadm [INF] Deploying daemon mgr.y on vm07 2026-03-09T14:37:50.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:49 vm11 bash[17885]: cluster 2026-03-09T14:37:49.055546+0000 mgr.x (mgr.24889) 73 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 99 MiB used, 160 GiB / 160 GiB avail; 932 B/s rd, 0 op/s 2026-03-09T14:37:50.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:49 vm11 bash[17885]: audit 2026-03-09T14:37:49.287998+0000 mon.a (mon.0) 867 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:50.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:49 vm11 bash[17885]: audit 2026-03-09T14:37:49.294313+0000 mon.a (mon.0) 868 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:50.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:49 vm11 bash[17885]: audit 2026-03-09T14:37:49.590971+0000 mon.c (mon.1) 92 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:37:50.383 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:49 vm07 bash[52213]: debug 2026-03-09T14:37:49.943+0000 7fde53247140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T14:37:50.657 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:50 vm07 bash[52213]: debug 2026-03-09T14:37:50.383+0000 7fde53247140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T14:37:50.657 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:50 vm07 bash[52213]: debug 2026-03-09T14:37:50.467+0000 7fde53247140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T14:37:50.657 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:50 vm07 bash[52213]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. 
This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T14:37:50.657 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:50 vm07 bash[52213]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-09T14:37:50.657 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:50 vm07 bash[52213]: from numpy import show_config as show_numpy_config 2026-03-09T14:37:50.657 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:50 vm07 bash[52213]: debug 2026-03-09T14:37:50.591+0000 7fde53247140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T14:37:51.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:50 vm11 bash[17885]: audit 2026-03-09T14:37:49.275210+0000 mgr.x (mgr.24889) 74 : audit [DBG] from='client.15045 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:37:51.157 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:50 vm07 bash[52213]: debug 2026-03-09T14:37:50.739+0000 7fde53247140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T14:37:51.157 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:50 vm07 bash[52213]: debug 2026-03-09T14:37:50.775+0000 7fde53247140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T14:37:51.157 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:50 vm07 bash[52213]: debug 2026-03-09T14:37:50.815+0000 7fde53247140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T14:37:51.157 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:50 vm07 bash[52213]: debug 2026-03-09T14:37:50.855+0000 7fde53247140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T14:37:51.157 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:50 vm07 bash[52213]: debug 2026-03-09T14:37:50.907+0000 7fde53247140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T14:37:51.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:50 vm07 bash[17480]: audit 2026-03-09T14:37:49.275210+0000 mgr.x (mgr.24889) 74 : audit [DBG] from='client.15045 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:37:51.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:50 vm07 bash[22585]: audit 2026-03-09T14:37:49.275210+0000 mgr.x (mgr.24889) 74 : audit [DBG] from='client.15045 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:37:51.615 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:51 vm07 bash[52213]: debug 2026-03-09T14:37:51.351+0000 7fde53247140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T14:37:51.615 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:51 vm07 bash[52213]: debug 2026-03-09T14:37:51.391+0000 7fde53247140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T14:37:51.615 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:51 vm07 bash[52213]: debug 2026-03-09T14:37:51.427+0000 7fde53247140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T14:37:51.615 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:51 vm07 bash[52213]: debug 2026-03-09T14:37:51.571+0000 7fde53247140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T14:37:51.865 
INFO:teuthology.orchestra.run.vm07.stdout:true 2026-03-09T14:37:51.907 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:51 vm07 bash[52213]: debug 2026-03-09T14:37:51.619+0000 7fde53247140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T14:37:51.907 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:51 vm07 bash[52213]: debug 2026-03-09T14:37:51.659+0000 7fde53247140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T14:37:51.907 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:51 vm07 bash[52213]: debug 2026-03-09T14:37:51.771+0000 7fde53247140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T14:37:52.214 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:51 vm07 bash[52213]: debug 2026-03-09T14:37:51.935+0000 7fde53247140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T14:37:52.214 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:52 vm07 bash[52213]: debug 2026-03-09T14:37:52.127+0000 7fde53247140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T14:37:52.214 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:52 vm07 bash[52213]: debug 2026-03-09T14:37:52.175+0000 7fde53247140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T14:37:52.214 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:52 vm07 bash[22585]: cluster 2026-03-09T14:37:51.056049+0000 mgr.x (mgr.24889) 75 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T14:37:52.214 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:52 vm07 bash[17480]: cluster 2026-03-09T14:37:51.056049+0000 mgr.x (mgr.24889) 75 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T14:37:52.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:52 vm11 bash[17885]: cluster 2026-03-09T14:37:51.056049+0000 mgr.x (mgr.24889) 75 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T14:37:52.303 INFO:teuthology.orchestra.run.vm07.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T14:37:52.303 INFO:teuthology.orchestra.run.vm07.stdout:alertmanager.a vm07 *:9093,9094 starting - - - - 2026-03-09T14:37:52.303 INFO:teuthology.orchestra.run.vm07.stdout:grafana.a vm11 *:3000 starting - - - - 2026-03-09T14:37:52.303 INFO:teuthology.orchestra.run.vm07.stdout:iscsi.foo.vm07.ohlmos vm07 starting - - - - 2026-03-09T14:37:52.303 INFO:teuthology.orchestra.run.vm07.stdout:mgr.x vm11 *:8443,9283 running (42s) 26s ago 7m 511M - 19.2.3-678-ge911bdeb 654f31e6858e bc02e91cc35e 2026-03-09T14:37:52.303 INFO:teuthology.orchestra.run.vm07.stdout:mgr.y vm07 *:8443,9283,8765 starting - - - - 2026-03-09T14:37:52.303 INFO:teuthology.orchestra.run.vm07.stdout:mon.a vm07 running (8m) 26s ago 8m 53.0M 2048M 17.2.0 e1d6a67b021e 47602ca6fae7 2026-03-09T14:37:52.303 INFO:teuthology.orchestra.run.vm07.stdout:mon.b vm11 running (7m) 26s ago 7m 42.4M 2048M 17.2.0 e1d6a67b021e eac3b7829b01 2026-03-09T14:37:52.303 INFO:teuthology.orchestra.run.vm07.stdout:mon.c vm07 running (7m) 26s ago 7m 41.0M 2048M 17.2.0 e1d6a67b021e 9c901130627b 2026-03-09T14:37:52.303 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.a vm07 *:9100 starting - - - - 2026-03-09T14:37:52.303 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.b vm11 *:9100 starting - - - - 
2026-03-09T14:37:52.303 INFO:teuthology.orchestra.run.vm07.stdout:osd.0 vm07 running (7m) 26s ago 7m 49.3M 4096M 17.2.0 e1d6a67b021e 7a4a11fbf70d 2026-03-09T14:37:52.303 INFO:teuthology.orchestra.run.vm07.stdout:osd.1 vm07 running (7m) 26s ago 7m 51.4M 4096M 17.2.0 e1d6a67b021e 15e2e23b506b 2026-03-09T14:37:52.303 INFO:teuthology.orchestra.run.vm07.stdout:osd.2 vm07 running (7m) 26s ago 7m 47.2M 4096M 17.2.0 e1d6a67b021e fe41cd2240dc 2026-03-09T14:37:52.303 INFO:teuthology.orchestra.run.vm07.stdout:osd.3 vm07 running (6m) 26s ago 6m 49.0M 4096M 17.2.0 e1d6a67b021e b07b01a0b5aa 2026-03-09T14:37:52.303 INFO:teuthology.orchestra.run.vm07.stdout:osd.4 vm11 running (6m) 26s ago 6m 50.0M 4096M 17.2.0 e1d6a67b021e 172516d931e5 2026-03-09T14:37:52.303 INFO:teuthology.orchestra.run.vm07.stdout:osd.5 vm11 running (6m) 26s ago 6m 47.4M 4096M 17.2.0 e1d6a67b021e d7defb26b5d1 2026-03-09T14:37:52.303 INFO:teuthology.orchestra.run.vm07.stdout:osd.6 vm11 running (6m) 26s ago 6m 47.2M 4096M 17.2.0 e1d6a67b021e 52e28e90b585 2026-03-09T14:37:52.303 INFO:teuthology.orchestra.run.vm07.stdout:osd.7 vm11 running (5m) 26s ago 5m 49.0M 4096M 17.2.0 e1d6a67b021e abb74346bf4d 2026-03-09T14:37:52.303 INFO:teuthology.orchestra.run.vm07.stdout:prometheus.a vm11 *:9095 starting - - - - 2026-03-09T14:37:52.303 INFO:teuthology.orchestra.run.vm07.stdout:rgw.foo.vm07.urmgxb vm07 *:8000 running (4m) 26s ago 4m 83.8M - 17.2.0 e1d6a67b021e 765128ae03a3 2026-03-09T14:37:52.303 INFO:teuthology.orchestra.run.vm07.stdout:rgw.foo.vm11.ncyump vm11 *:8000 running (4m) 26s ago 4m 83.2M - 17.2.0 e1d6a67b021e 33917711cfd6 2026-03-09T14:37:52.303 INFO:teuthology.orchestra.run.vm07.stdout:rgw.smpl.vm07.tkkeli vm07 *:80 running (4m) 26s ago 4m 83.5M - 17.2.0 e1d6a67b021e 377fed84fff0 2026-03-09T14:37:52.303 INFO:teuthology.orchestra.run.vm07.stdout:rgw.smpl.vm11.ocxkef vm11 *:80 running (4m) 26s ago 4m 83.5M - 17.2.0 e1d6a67b021e 90ec06d07cd4 2026-03-09T14:37:52.546 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T14:37:52.546 INFO:teuthology.orchestra.run.vm07.stdout: "mon": { 2026-03-09T14:37:52.546 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 3 2026-03-09T14:37:52.546 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:37:52.546 INFO:teuthology.orchestra.run.vm07.stdout: "mgr": { 2026-03-09T14:37:52.546 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 1, 2026-03-09T14:37:52.546 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 1 2026-03-09T14:37:52.546 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:37:52.546 INFO:teuthology.orchestra.run.vm07.stdout: "osd": { 2026-03-09T14:37:52.546 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8 2026-03-09T14:37:52.546 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:37:52.547 INFO:teuthology.orchestra.run.vm07.stdout: "mds": {}, 2026-03-09T14:37:52.547 INFO:teuthology.orchestra.run.vm07.stdout: "rgw": { 2026-03-09T14:37:52.547 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 4 2026-03-09T14:37:52.547 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:37:52.547 INFO:teuthology.orchestra.run.vm07.stdout: "overall": { 2026-03-09T14:37:52.547 
INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 16, 2026-03-09T14:37:52.547 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 1 2026-03-09T14:37:52.547 INFO:teuthology.orchestra.run.vm07.stdout: } 2026-03-09T14:37:52.547 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T14:37:52.601 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:52 vm07 bash[52213]: debug 2026-03-09T14:37:52.215+0000 7fde53247140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T14:37:52.601 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:52 vm07 bash[52213]: debug 2026-03-09T14:37:52.375+0000 7fde53247140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T14:37:52.758 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T14:37:52.758 INFO:teuthology.orchestra.run.vm07.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", 2026-03-09T14:37:52.758 INFO:teuthology.orchestra.run.vm07.stdout: "in_progress": true, 2026-03-09T14:37:52.758 INFO:teuthology.orchestra.run.vm07.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-09T14:37:52.758 INFO:teuthology.orchestra.run.vm07.stdout: "services_complete": [], 2026-03-09T14:37:52.758 INFO:teuthology.orchestra.run.vm07.stdout: "progress": "", 2026-03-09T14:37:52.758 INFO:teuthology.orchestra.run.vm07.stdout: "message": "Currently upgrading mgr daemons", 2026-03-09T14:37:52.758 INFO:teuthology.orchestra.run.vm07.stdout: "is_paused": false 2026-03-09T14:37:52.758 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T14:37:52.907 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:52 vm07 bash[52213]: debug 2026-03-09T14:37:52.603+0000 7fde53247140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T14:37:52.907 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:52 vm07 bash[52213]: [09/Mar/2026:14:37:52] ENGINE Bus STARTING 2026-03-09T14:37:52.907 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:52 vm07 bash[52213]: CherryPy Checker: 2026-03-09T14:37:52.907 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:52 vm07 bash[52213]: The Application mounted at '' has an empty config. 2026-03-09T14:37:52.907 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:52 vm07 bash[52213]: [09/Mar/2026:14:37:52] ENGINE Serving on http://:::9283 2026-03-09T14:37:52.907 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:37:52 vm07 bash[52213]: [09/Mar/2026:14:37:52] ENGINE Bus STARTED 2026-03-09T14:37:52.983 INFO:teuthology.orchestra.run.vm07.stdout:HEALTH_OK 2026-03-09T14:37:53.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:53 vm11 bash[17885]: audit 2026-03-09T14:37:51.851622+0000 mgr.x (mgr.24889) 76 : audit [DBG] from='client.15093 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:37:53.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:53 vm11 bash[17885]: audit 2026-03-09T14:37:52.552379+0000 mon.a (mon.0) 869 : audit [DBG] from='client.? 
192.168.123.107:0/79944011' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:37:53.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:53 vm11 bash[17885]: cluster 2026-03-09T14:37:52.610120+0000 mon.a (mon.0) 870 : cluster [DBG] Standby manager daemon y restarted 2026-03-09T14:37:53.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:53 vm11 bash[17885]: cluster 2026-03-09T14:37:52.610305+0000 mon.a (mon.0) 871 : cluster [DBG] Standby manager daemon y started 2026-03-09T14:37:53.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:53 vm11 bash[17885]: audit 2026-03-09T14:37:52.612110+0000 mon.c (mon.1) 93 : audit [DBG] from='mgr.? 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch 2026-03-09T14:37:53.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:53 vm11 bash[17885]: audit 2026-03-09T14:37:52.620357+0000 mon.c (mon.1) 94 : audit [DBG] from='mgr.? 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T14:37:53.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:53 vm11 bash[17885]: audit 2026-03-09T14:37:52.622433+0000 mon.c (mon.1) 95 : audit [DBG] from='mgr.? 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/key"}]: dispatch 2026-03-09T14:37:53.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:53 vm11 bash[17885]: audit 2026-03-09T14:37:52.622978+0000 mon.c (mon.1) 96 : audit [DBG] from='mgr.? 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T14:37:53.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:53 vm11 bash[17885]: audit 2026-03-09T14:37:52.986718+0000 mon.b (mon.2) 113 : audit [DBG] from='client.? 192.168.123.107:0/2719485380' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:37:53.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:53 vm07 bash[17480]: audit 2026-03-09T14:37:51.851622+0000 mgr.x (mgr.24889) 76 : audit [DBG] from='client.15093 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:37:53.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:53 vm07 bash[17480]: audit 2026-03-09T14:37:52.552379+0000 mon.a (mon.0) 869 : audit [DBG] from='client.? 192.168.123.107:0/79944011' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:37:53.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:53 vm07 bash[17480]: cluster 2026-03-09T14:37:52.610120+0000 mon.a (mon.0) 870 : cluster [DBG] Standby manager daemon y restarted 2026-03-09T14:37:53.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:53 vm07 bash[17480]: cluster 2026-03-09T14:37:52.610305+0000 mon.a (mon.0) 871 : cluster [DBG] Standby manager daemon y started 2026-03-09T14:37:53.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:53 vm07 bash[17480]: audit 2026-03-09T14:37:52.612110+0000 mon.c (mon.1) 93 : audit [DBG] from='mgr.? 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch 2026-03-09T14:37:53.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:53 vm07 bash[17480]: audit 2026-03-09T14:37:52.620357+0000 mon.c (mon.1) 94 : audit [DBG] from='mgr.? 
192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T14:37:53.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:53 vm07 bash[17480]: audit 2026-03-09T14:37:52.622433+0000 mon.c (mon.1) 95 : audit [DBG] from='mgr.? 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/key"}]: dispatch 2026-03-09T14:37:53.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:53 vm07 bash[17480]: audit 2026-03-09T14:37:52.622978+0000 mon.c (mon.1) 96 : audit [DBG] from='mgr.? 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T14:37:53.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:53 vm07 bash[17480]: audit 2026-03-09T14:37:52.986718+0000 mon.b (mon.2) 113 : audit [DBG] from='client.? 192.168.123.107:0/2719485380' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:37:53.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:53 vm07 bash[22585]: audit 2026-03-09T14:37:51.851622+0000 mgr.x (mgr.24889) 76 : audit [DBG] from='client.15093 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:37:53.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:53 vm07 bash[22585]: audit 2026-03-09T14:37:52.552379+0000 mon.a (mon.0) 869 : audit [DBG] from='client.? 192.168.123.107:0/79944011' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:37:53.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:53 vm07 bash[22585]: cluster 2026-03-09T14:37:52.610120+0000 mon.a (mon.0) 870 : cluster [DBG] Standby manager daemon y restarted 2026-03-09T14:37:53.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:53 vm07 bash[22585]: cluster 2026-03-09T14:37:52.610305+0000 mon.a (mon.0) 871 : cluster [DBG] Standby manager daemon y started 2026-03-09T14:37:53.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:53 vm07 bash[22585]: audit 2026-03-09T14:37:52.612110+0000 mon.c (mon.1) 93 : audit [DBG] from='mgr.? 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch 2026-03-09T14:37:53.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:53 vm07 bash[22585]: audit 2026-03-09T14:37:52.620357+0000 mon.c (mon.1) 94 : audit [DBG] from='mgr.? 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T14:37:53.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:53 vm07 bash[22585]: audit 2026-03-09T14:37:52.622433+0000 mon.c (mon.1) 95 : audit [DBG] from='mgr.? 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/key"}]: dispatch 2026-03-09T14:37:53.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:53 vm07 bash[22585]: audit 2026-03-09T14:37:52.622978+0000 mon.c (mon.1) 96 : audit [DBG] from='mgr.? 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T14:37:53.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:53 vm07 bash[22585]: audit 2026-03-09T14:37:52.986718+0000 mon.b (mon.2) 113 : audit [DBG] from='client.? 
192.168.123.107:0/2719485380' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:37:54.004 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:37:53 vm11 bash[37598]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:37:53] "GET /metrics HTTP/1.1" 200 37765 "" "Prometheus/2.51.0" 2026-03-09T14:37:54.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:54 vm07 bash[17480]: audit 2026-03-09T14:37:52.079889+0000 mgr.x (mgr.24889) 77 : audit [DBG] from='client.24845 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:37:54.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:54 vm07 bash[17480]: audit 2026-03-09T14:37:52.302220+0000 mgr.x (mgr.24889) 78 : audit [DBG] from='client.15102 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:37:54.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:54 vm07 bash[17480]: audit 2026-03-09T14:37:52.761396+0000 mgr.x (mgr.24889) 79 : audit [DBG] from='client.25012 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:37:54.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:54 vm07 bash[17480]: cluster 2026-03-09T14:37:53.029412+0000 mon.a (mon.0) 872 : cluster [DBG] mgrmap e27: x(active, since 34s), standbys: y 2026-03-09T14:37:54.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:54 vm07 bash[17480]: cluster 2026-03-09T14:37:53.056352+0000 mgr.x (mgr.24889) 80 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:37:54.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:54 vm07 bash[22585]: audit 2026-03-09T14:37:52.079889+0000 mgr.x (mgr.24889) 77 : audit [DBG] from='client.24845 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:37:54.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:54 vm07 bash[22585]: audit 2026-03-09T14:37:52.302220+0000 mgr.x (mgr.24889) 78 : audit [DBG] from='client.15102 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:37:54.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:54 vm07 bash[22585]: audit 2026-03-09T14:37:52.761396+0000 mgr.x (mgr.24889) 79 : audit [DBG] from='client.25012 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:37:54.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:54 vm07 bash[22585]: cluster 2026-03-09T14:37:53.029412+0000 mon.a (mon.0) 872 : cluster [DBG] mgrmap e27: x(active, since 34s), standbys: y 2026-03-09T14:37:54.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:54 vm07 bash[22585]: cluster 2026-03-09T14:37:53.056352+0000 mgr.x (mgr.24889) 80 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:37:54.473 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:54 vm11 bash[17885]: audit 2026-03-09T14:37:52.079889+0000 mgr.x (mgr.24889) 77 : audit [DBG] from='client.24845 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:37:54.473 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:54 vm11 bash[17885]: audit 2026-03-09T14:37:52.302220+0000 mgr.x (mgr.24889) 78 : audit [DBG] from='client.15102 -' entity='client.admin' 
cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:37:54.473 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:54 vm11 bash[17885]: audit 2026-03-09T14:37:52.761396+0000 mgr.x (mgr.24889) 79 : audit [DBG] from='client.25012 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:37:54.473 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:54 vm11 bash[17885]: cluster 2026-03-09T14:37:53.029412+0000 mon.a (mon.0) 872 : cluster [DBG] mgrmap e27: x(active, since 34s), standbys: y 2026-03-09T14:37:54.473 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:54 vm11 bash[17885]: cluster 2026-03-09T14:37:53.056352+0000 mgr.x (mgr.24889) 80 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:37:54.473 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:37:54 vm11 bash[40106]: ts=2026-03-09T14:37:54.146Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"192.168.123.111:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:37:56.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:55 vm11 bash[17885]: audit 2026-03-09T14:37:54.667281+0000 mon.a (mon.0) 873 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:56.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:55 vm11 bash[17885]: audit 2026-03-09T14:37:54.676850+0000 mon.a (mon.0) 874 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:56.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:55 vm11 bash[17885]: audit 2026-03-09T14:37:54.802502+0000 mon.a (mon.0) 875 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:56.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:55 vm11 bash[17885]: audit 
2026-03-09T14:37:54.809490+0000 mon.a (mon.0) 876 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:56.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:55 vm11 bash[17885]: cluster 2026-03-09T14:37:55.056701+0000 mgr.x (mgr.24889) 81 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 539 B/s rd, 0 op/s 2026-03-09T14:37:56.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:55 vm11 bash[17885]: audit 2026-03-09T14:37:55.362769+0000 mon.a (mon.0) 877 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:56.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:55 vm11 bash[17885]: audit 2026-03-09T14:37:55.370224+0000 mon.a (mon.0) 878 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:56.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:55 vm07 bash[17480]: audit 2026-03-09T14:37:54.667281+0000 mon.a (mon.0) 873 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:56.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:55 vm07 bash[17480]: audit 2026-03-09T14:37:54.676850+0000 mon.a (mon.0) 874 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:56.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:55 vm07 bash[17480]: audit 2026-03-09T14:37:54.802502+0000 mon.a (mon.0) 875 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:56.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:55 vm07 bash[17480]: audit 2026-03-09T14:37:54.809490+0000 mon.a (mon.0) 876 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:56.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:55 vm07 bash[17480]: cluster 2026-03-09T14:37:55.056701+0000 mgr.x (mgr.24889) 81 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 539 B/s rd, 0 op/s 2026-03-09T14:37:56.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:55 vm07 bash[17480]: audit 2026-03-09T14:37:55.362769+0000 mon.a (mon.0) 877 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:56.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:55 vm07 bash[17480]: audit 2026-03-09T14:37:55.370224+0000 mon.a (mon.0) 878 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:56.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:55 vm07 bash[22585]: audit 2026-03-09T14:37:54.667281+0000 mon.a (mon.0) 873 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:56.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:55 vm07 bash[22585]: audit 2026-03-09T14:37:54.676850+0000 mon.a (mon.0) 874 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:56.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:55 vm07 bash[22585]: audit 2026-03-09T14:37:54.802502+0000 mon.a (mon.0) 875 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:56.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:55 vm07 bash[22585]: audit 2026-03-09T14:37:54.809490+0000 mon.a (mon.0) 876 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:56.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:55 vm07 bash[22585]: cluster 2026-03-09T14:37:55.056701+0000 mgr.x (mgr.24889) 81 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 539 B/s rd, 0 op/s 2026-03-09T14:37:56.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:55 vm07 bash[22585]: audit 2026-03-09T14:37:55.362769+0000 mon.a (mon.0) 877 : audit [INF] from='mgr.24889 ' entity='mgr.x' 
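Editor's sketch (not part of the recorded run): the CephOSDFlapping failure logged just above is a PromQL join problem rather than an OSD problem. The error lists two ceph_osd_metadata series for osd.0 that differ only in their instance and cluster labels, so the "* on (ceph_daemon) group_left (hostname)" match in the alert expression finds more than one right-hand series per ceph_daemon and rejects the many-to-many join. One way to inspect this against the prometheus.a endpoint shown in the orch ps listing (vm11, port 9095 -- an assumption here; adjust host/port for your environment) is sketched below; the aggregated expression only illustrates how the right-hand side could be made unique and is not the rule shipped with cephadm.

  # Sketch only: confirm the duplicate ceph_osd_metadata series behind the
  # "many-to-many matching not allowed" error, then try a de-duplicated join.
  PROM=http://vm11:9095   # prometheus.a per the orch ps output above (assumed reachable)
  # Expect two series for the same OSD while both metric sources are present:
  curl -sG "$PROM/api/v1/query" --data-urlencode 'query=ceph_osd_metadata{ceph_daemon="osd.0"}'
  # Collapsing the right-hand side to one series per (ceph_daemon, hostname)
  # lets the group_left join from the alert expression evaluate again:
  curl -sG "$PROM/api/v1/query" --data-urlencode \
    'query=(rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) max by (ceph_daemon, hostname) (ceph_osd_metadata)) * 60 > 1'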
2026-03-09T14:37:56.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:55 vm07 bash[22585]: audit 2026-03-09T14:37:55.370224+0000 mon.a (mon.0) 878 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:37:57.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:57 vm07 bash[17480]: cluster 2026-03-09T14:37:57.057298+0000 mgr.x (mgr.24889) 82 : cluster [DBG] pgmap v29: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 977 B/s rd, 0 op/s 2026-03-09T14:37:57.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:57 vm07 bash[22585]: cluster 2026-03-09T14:37:57.057298+0000 mgr.x (mgr.24889) 82 : cluster [DBG] pgmap v29: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 977 B/s rd, 0 op/s 2026-03-09T14:37:57.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:57 vm11 bash[17885]: cluster 2026-03-09T14:37:57.057298+0000 mgr.x (mgr.24889) 82 : cluster [DBG] pgmap v29: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 977 B/s rd, 0 op/s 2026-03-09T14:37:59.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:37:59 vm07 bash[17480]: cluster 2026-03-09T14:37:59.057618+0000 mgr.x (mgr.24889) 83 : cluster [DBG] pgmap v30: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:37:59.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:37:59 vm07 bash[22585]: cluster 2026-03-09T14:37:59.057618+0000 mgr.x (mgr.24889) 83 : cluster [DBG] pgmap v30: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:37:59.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:37:59 vm11 bash[17885]: cluster 2026-03-09T14:37:59.057618+0000 mgr.x (mgr.24889) 83 : cluster [DBG] pgmap v30: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:38:00.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:00 vm07 bash[17480]: audit 2026-03-09T14:37:59.285941+0000 mgr.x (mgr.24889) 84 : audit [DBG] from='client.15045 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:38:00.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:00 vm07 bash[22585]: audit 2026-03-09T14:37:59.285941+0000 mgr.x (mgr.24889) 84 : audit [DBG] from='client.15045 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:38:00.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:00 vm11 bash[17885]: audit 2026-03-09T14:37:59.285941+0000 mgr.x (mgr.24889) 84 : audit [DBG] from='client.15045 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:38:02.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:02 vm11 bash[17885]: cluster 2026-03-09T14:38:01.058215+0000 mgr.x (mgr.24889) 85 : cluster [DBG] pgmap v31: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:38:02.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:02 vm11 bash[17885]: audit 2026-03-09T14:38:01.996867+0000 mon.a (mon.0) 879 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:38:02.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:02 vm07 bash[22585]: cluster 2026-03-09T14:38:01.058215+0000 mgr.x (mgr.24889) 85 : cluster [DBG] pgmap v31: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-09T14:38:02.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:02 vm07 bash[22585]: audit 2026-03-09T14:38:01.996867+0000 mon.a (mon.0) 879 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:38:02.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:02 vm07 bash[17480]: cluster 2026-03-09T14:38:01.058215+0000 mgr.x (mgr.24889) 85 : cluster [DBG] pgmap v31: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:38:02.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:02 vm07 bash[17480]: audit 2026-03-09T14:38:01.996867+0000 mon.a (mon.0) 879 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:38:03.296 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:03 vm11 bash[37598]: ignoring --setuser ceph since I am not root 2026-03-09T14:38:03.296 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:03 vm11 bash[37598]: ignoring --setgroup ceph since I am not root 2026-03-09T14:38:03.296 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:03 vm11 bash[37598]: debug 2026-03-09T14:38:03.146+0000 7f75360cc140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T14:38:03.296 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:03 vm11 bash[37598]: debug 2026-03-09T14:38:03.182+0000 7f75360cc140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T14:38:03.296 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:03 vm11 bash[17885]: audit 2026-03-09T14:38:02.016256+0000 mon.a (mon.0) 880 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:38:03.296 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:03 vm11 bash[17885]: audit 2026-03-09T14:38:02.017472+0000 mon.c (mon.1) 97 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:38:03.296 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:03 vm11 bash[17885]: audit 2026-03-09T14:38:02.018393+0000 mon.c (mon.1) 98 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:38:03.296 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:03 vm11 bash[17885]: audit 2026-03-09T14:38:02.024414+0000 mon.a (mon.0) 881 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:38:03.296 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:03 vm11 bash[17885]: audit 2026-03-09T14:38:02.066479+0000 mon.c (mon.1) 99 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:38:03.296 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:03 vm11 bash[17885]: audit 2026-03-09T14:38:02.069913+0000 mon.c (mon.1) 100 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "mgr fail", "who": "x"}]: dispatch 2026-03-09T14:38:03.296 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:03 vm11 bash[17885]: audit 2026-03-09T14:38:02.070240+0000 mon.a (mon.0) 882 : audit [INF] from='mgr.24889 ' entity='mgr.x' cmd=[{"prefix": "mgr fail", "who": "x"}]: dispatch 2026-03-09T14:38:03.296 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:03 vm11 bash[17885]: cluster 2026-03-09T14:38:02.079478+0000 mon.a (mon.0) 883 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-09T14:38:03.332 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:03 vm07 bash[22585]: audit 2026-03-09T14:38:02.016256+0000 mon.a (mon.0) 880 : audit [INF] 
from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:38:03.332 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:03 vm07 bash[22585]: audit 2026-03-09T14:38:02.017472+0000 mon.c (mon.1) 97 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:38:03.332 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:03 vm07 bash[22585]: audit 2026-03-09T14:38:02.018393+0000 mon.c (mon.1) 98 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:38:03.332 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:03 vm07 bash[22585]: audit 2026-03-09T14:38:02.024414+0000 mon.a (mon.0) 881 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:38:03.332 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:03 vm07 bash[22585]: audit 2026-03-09T14:38:02.066479+0000 mon.c (mon.1) 99 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:38:03.332 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:03 vm07 bash[22585]: audit 2026-03-09T14:38:02.069913+0000 mon.c (mon.1) 100 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "mgr fail", "who": "x"}]: dispatch 2026-03-09T14:38:03.333 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:03 vm07 bash[22585]: audit 2026-03-09T14:38:02.070240+0000 mon.a (mon.0) 882 : audit [INF] from='mgr.24889 ' entity='mgr.x' cmd=[{"prefix": "mgr fail", "who": "x"}]: dispatch 2026-03-09T14:38:03.333 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:03 vm07 bash[22585]: cluster 2026-03-09T14:38:02.079478+0000 mon.a (mon.0) 883 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-09T14:38:03.333 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:03 vm07 bash[17480]: audit 2026-03-09T14:38:02.016256+0000 mon.a (mon.0) 880 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:38:03.333 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:03 vm07 bash[17480]: audit 2026-03-09T14:38:02.017472+0000 mon.c (mon.1) 97 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:38:03.333 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:03 vm07 bash[17480]: audit 2026-03-09T14:38:02.018393+0000 mon.c (mon.1) 98 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:38:03.333 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:03 vm07 bash[17480]: audit 2026-03-09T14:38:02.024414+0000 mon.a (mon.0) 881 : audit [INF] from='mgr.24889 ' entity='mgr.x' 2026-03-09T14:38:03.333 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:03 vm07 bash[17480]: audit 2026-03-09T14:38:02.066479+0000 mon.c (mon.1) 99 : audit [DBG] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:38:03.333 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:03 vm07 bash[17480]: audit 2026-03-09T14:38:02.069913+0000 mon.c (mon.1) 100 : audit [INF] from='mgr.24889 192.168.123.111:0/1706674727' entity='mgr.x' cmd=[{"prefix": "mgr fail", "who": "x"}]: dispatch 2026-03-09T14:38:03.333 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:03 vm07 bash[17480]: audit 2026-03-09T14:38:02.070240+0000 mon.a (mon.0) 882 : audit [INF] 
from='mgr.24889 ' entity='mgr.x' cmd=[{"prefix": "mgr fail", "who": "x"}]: dispatch 2026-03-09T14:38:03.333 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:03 vm07 bash[17480]: cluster 2026-03-09T14:38:02.079478+0000 mon.a (mon.0) 883 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-09T14:38:03.333 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:03 vm07 bash[52213]: [09/Mar/2026:14:38:03] ENGINE Bus STOPPING 2026-03-09T14:38:03.333 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:03 vm07 bash[52213]: [09/Mar/2026:14:38:03] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T14:38:03.333 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:03 vm07 bash[52213]: [09/Mar/2026:14:38:03] ENGINE Bus STOPPED 2026-03-09T14:38:03.551 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:03 vm11 bash[37598]: debug 2026-03-09T14:38:03.294+0000 7f75360cc140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T14:38:03.588 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:38:03 vm07 bash[51060]: ts=2026-03-09T14:38:03.512Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://vm07.local:8443/api/prometheus_receiver\": dial tcp 192.168.123.107:8443: connect: connection refused" 2026-03-09T14:38:03.588 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:38:03 vm07 bash[51060]: ts=2026-03-09T14:38:03.512Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://vm11.local:8443/api/prometheus_receiver\": dial tcp 192.168.123.111:8443: connect: connection refused" 2026-03-09T14:38:03.589 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:03 vm07 bash[52213]: [09/Mar/2026:14:38:03] ENGINE Bus STARTING 2026-03-09T14:38:03.589 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:03 vm07 bash[52213]: [09/Mar/2026:14:38:03] ENGINE Serving on http://:::9283 2026-03-09T14:38:03.589 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:03 vm07 bash[52213]: [09/Mar/2026:14:38:03] ENGINE Bus STARTED 2026-03-09T14:38:04.004 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:03 vm11 bash[37598]: debug 2026-03-09T14:38:03.574+0000 7f75360cc140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T14:38:04.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:04 vm07 bash[22585]: audit 2026-03-09T14:38:03.034086+0000 mon.a (mon.0) 884 : audit [INF] from='mgr.24889 ' entity='mgr.x' cmd='[{"prefix": "mgr fail", "who": "x"}]': finished 2026-03-09T14:38:04.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:04 vm07 bash[22585]: cluster 2026-03-09T14:38:03.034173+0000 mon.a (mon.0) 885 : cluster [DBG] mgrmap e28: y(active, starting, since 0.962988s) 2026-03-09T14:38:04.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:04 vm07 bash[22585]: audit 2026-03-09T14:38:03.045402+0000 mon.c (mon.1) 101 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:38:04.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:04 vm07 bash[22585]: audit 2026-03-09T14:38:03.045796+0000 mon.c (mon.1) 102 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:38:04.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:04 vm07 
bash[22585]: audit 2026-03-09T14:38:03.047245+0000 mon.c (mon.1) 103 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:38:04.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:04 vm07 bash[22585]: audit 2026-03-09T14:38:03.047696+0000 mon.c (mon.1) 104 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T14:38:04.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:04 vm07 bash[22585]: audit 2026-03-09T14:38:03.048186+0000 mon.c (mon.1) 105 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:38:04.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:04 vm07 bash[22585]: audit 2026-03-09T14:38:03.048631+0000 mon.c (mon.1) 106 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:38:04.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:04 vm07 bash[22585]: audit 2026-03-09T14:38:03.049122+0000 mon.c (mon.1) 107 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:38:04.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:04 vm07 bash[22585]: audit 2026-03-09T14:38:03.049612+0000 mon.c (mon.1) 108 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:38:04.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:04 vm07 bash[22585]: audit 2026-03-09T14:38:03.050126+0000 mon.c (mon.1) 109 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:38:04.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:04 vm07 bash[22585]: audit 2026-03-09T14:38:03.050603+0000 mon.c (mon.1) 110 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:38:04.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:04 vm07 bash[22585]: audit 2026-03-09T14:38:03.051053+0000 mon.c (mon.1) 111 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:38:04.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:04 vm07 bash[22585]: audit 2026-03-09T14:38:03.051589+0000 mon.c (mon.1) 112 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:38:04.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:04 vm07 bash[22585]: audit 2026-03-09T14:38:03.052128+0000 mon.c (mon.1) 113 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T14:38:04.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:04 vm07 bash[22585]: audit 2026-03-09T14:38:03.052553+0000 mon.c (mon.1) 114 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T14:38:04.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:04 vm07 bash[22585]: audit 2026-03-09T14:38:03.053118+0000 mon.c (mon.1) 115 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 
2026-03-09T14:38:04.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:04 vm07 bash[22585]: cluster 2026-03-09T14:38:03.305680+0000 mon.a (mon.0) 886 : cluster [INF] Manager daemon y is now available 2026-03-09T14:38:04.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:04 vm07 bash[22585]: audit 2026-03-09T14:38:03.336474+0000 mon.c (mon.1) 116 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:38:04.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:04 vm07 bash[22585]: audit 2026-03-09T14:38:03.362135+0000 mon.c (mon.1) 117 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:38:04.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:04 vm07 bash[22585]: audit 2026-03-09T14:38:03.362534+0000 mon.a (mon.0) 887 : audit [INF] from='mgr.24991 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:38:04.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:04 vm07 bash[22585]: audit 2026-03-09T14:38:03.396424+0000 mon.c (mon.1) 118 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:38:04.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:04 vm07 bash[22585]: audit 2026-03-09T14:38:03.396954+0000 mon.a (mon.0) 888 : audit [INF] from='mgr.24991 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:38:04.408 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:38:03 vm07 bash[51060]: ts=2026-03-09T14:38:03.987Z caller=notify.go:743 level=info component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify success" attempts=2 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:04 vm07 bash[17480]: audit 2026-03-09T14:38:03.034086+0000 mon.a (mon.0) 884 : audit [INF] from='mgr.24889 ' entity='mgr.x' cmd='[{"prefix": "mgr fail", "who": "x"}]': finished 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:04 vm07 bash[17480]: cluster 2026-03-09T14:38:03.034173+0000 mon.a (mon.0) 885 : cluster [DBG] mgrmap e28: y(active, starting, since 0.962988s) 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:04 vm07 bash[17480]: audit 2026-03-09T14:38:03.045402+0000 mon.c (mon.1) 101 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:04 vm07 bash[17480]: audit 2026-03-09T14:38:03.045796+0000 mon.c (mon.1) 102 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:04 vm07 bash[17480]: audit 2026-03-09T14:38:03.047245+0000 mon.c (mon.1) 103 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:04 vm07 bash[17480]: audit 2026-03-09T14:38:03.047696+0000 mon.c (mon.1) 104 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' 
entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:04 vm07 bash[17480]: audit 2026-03-09T14:38:03.048186+0000 mon.c (mon.1) 105 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:04 vm07 bash[17480]: audit 2026-03-09T14:38:03.048631+0000 mon.c (mon.1) 106 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:04 vm07 bash[17480]: audit 2026-03-09T14:38:03.049122+0000 mon.c (mon.1) 107 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:04 vm07 bash[17480]: audit 2026-03-09T14:38:03.049612+0000 mon.c (mon.1) 108 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:04 vm07 bash[17480]: audit 2026-03-09T14:38:03.050126+0000 mon.c (mon.1) 109 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:04 vm07 bash[17480]: audit 2026-03-09T14:38:03.050603+0000 mon.c (mon.1) 110 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:04 vm07 bash[17480]: audit 2026-03-09T14:38:03.051053+0000 mon.c (mon.1) 111 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:04 vm07 bash[17480]: audit 2026-03-09T14:38:03.051589+0000 mon.c (mon.1) 112 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:04 vm07 bash[17480]: audit 2026-03-09T14:38:03.052128+0000 mon.c (mon.1) 113 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:04 vm07 bash[17480]: audit 2026-03-09T14:38:03.052553+0000 mon.c (mon.1) 114 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:04 vm07 bash[17480]: audit 2026-03-09T14:38:03.053118+0000 mon.c (mon.1) 115 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:04 vm07 bash[17480]: cluster 2026-03-09T14:38:03.305680+0000 mon.a (mon.0) 886 : cluster [INF] Manager daemon y is now available 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:04 vm07 bash[17480]: audit 2026-03-09T14:38:03.336474+0000 mon.c (mon.1) 116 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 
cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:04 vm07 bash[17480]: audit 2026-03-09T14:38:03.362135+0000 mon.c (mon.1) 117 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:04 vm07 bash[17480]: audit 2026-03-09T14:38:03.362534+0000 mon.a (mon.0) 887 : audit [INF] from='mgr.24991 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:04 vm07 bash[17480]: audit 2026-03-09T14:38:03.396424+0000 mon.c (mon.1) 118 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:04 vm07 bash[17480]: audit 2026-03-09T14:38:03.396954+0000 mon.a (mon.0) 888 : audit [INF] from='mgr.24991 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:04 vm11 bash[37598]: debug 2026-03-09T14:38:04.034+0000 7f75360cc140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:04 vm11 bash[37598]: debug 2026-03-09T14:38:04.134+0000 7f75360cc140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:04 vm11 bash[37598]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:04 vm11 bash[37598]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-09T14:38:04.408 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:04 vm11 bash[37598]: from numpy import show_config as show_numpy_config 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:04 vm11 bash[37598]: debug 2026-03-09T14:38:04.270+0000 7f75360cc140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:04 vm11 bash[17885]: audit 2026-03-09T14:38:03.034086+0000 mon.a (mon.0) 884 : audit [INF] from='mgr.24889 ' entity='mgr.x' cmd='[{"prefix": "mgr fail", "who": "x"}]': finished 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:04 vm11 bash[17885]: cluster 2026-03-09T14:38:03.034173+0000 mon.a (mon.0) 885 : cluster [DBG] mgrmap e28: y(active, starting, since 0.962988s) 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:04 vm11 bash[17885]: audit 2026-03-09T14:38:03.045402+0000 mon.c (mon.1) 101 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:04 vm11 bash[17885]: audit 2026-03-09T14:38:03.045796+0000 mon.c (mon.1) 102 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:04 vm11 bash[17885]: audit 2026-03-09T14:38:03.047245+0000 mon.c (mon.1) 103 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:04 vm11 bash[17885]: audit 2026-03-09T14:38:03.047696+0000 mon.c (mon.1) 104 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:04 vm11 bash[17885]: audit 2026-03-09T14:38:03.048186+0000 mon.c (mon.1) 105 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:04 vm11 bash[17885]: audit 2026-03-09T14:38:03.048631+0000 mon.c (mon.1) 106 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:04 vm11 bash[17885]: audit 2026-03-09T14:38:03.049122+0000 mon.c (mon.1) 107 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:04 vm11 bash[17885]: audit 2026-03-09T14:38:03.049612+0000 mon.c (mon.1) 108 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:04 vm11 bash[17885]: audit 2026-03-09T14:38:03.050126+0000 mon.c (mon.1) 109 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:04 vm11 bash[17885]: audit 2026-03-09T14:38:03.050603+0000 mon.c (mon.1) 110 : audit [DBG] from='mgr.24991 
192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:38:04.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:04 vm11 bash[17885]: audit 2026-03-09T14:38:03.051053+0000 mon.c (mon.1) 111 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:38:04.409 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:04 vm11 bash[17885]: audit 2026-03-09T14:38:03.051589+0000 mon.c (mon.1) 112 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:38:04.409 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:04 vm11 bash[17885]: audit 2026-03-09T14:38:03.052128+0000 mon.c (mon.1) 113 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T14:38:04.409 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:04 vm11 bash[17885]: audit 2026-03-09T14:38:03.052553+0000 mon.c (mon.1) 114 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T14:38:04.409 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:04 vm11 bash[17885]: audit 2026-03-09T14:38:03.053118+0000 mon.c (mon.1) 115 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T14:38:04.409 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:04 vm11 bash[17885]: cluster 2026-03-09T14:38:03.305680+0000 mon.a (mon.0) 886 : cluster [INF] Manager daemon y is now available 2026-03-09T14:38:04.409 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:04 vm11 bash[17885]: audit 2026-03-09T14:38:03.336474+0000 mon.c (mon.1) 116 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:38:04.409 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:04 vm11 bash[17885]: audit 2026-03-09T14:38:03.362135+0000 mon.c (mon.1) 117 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:38:04.409 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:04 vm11 bash[17885]: audit 2026-03-09T14:38:03.362534+0000 mon.a (mon.0) 887 : audit [INF] from='mgr.24991 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:38:04.409 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:04 vm11 bash[17885]: audit 2026-03-09T14:38:03.396424+0000 mon.c (mon.1) 118 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:38:04.409 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:04 vm11 bash[17885]: audit 2026-03-09T14:38:03.396954+0000 mon.a (mon.0) 888 : audit [INF] from='mgr.24991 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:38:04.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:04 vm11 bash[37598]: debug 2026-03-09T14:38:04.406+0000 7f75360cc140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T14:38:04.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:04 vm11 bash[37598]: debug 2026-03-09T14:38:04.446+0000 
7f75360cc140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T14:38:04.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:04 vm11 bash[37598]: debug 2026-03-09T14:38:04.482+0000 7f75360cc140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T14:38:04.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:04 vm11 bash[37598]: debug 2026-03-09T14:38:04.522+0000 7f75360cc140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T14:38:04.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:04 vm11 bash[37598]: debug 2026-03-09T14:38:04.574+0000 7f75360cc140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T14:38:05.288 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:05 vm11 bash[37598]: debug 2026-03-09T14:38:05.010+0000 7f75360cc140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T14:38:05.288 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:05 vm11 bash[37598]: debug 2026-03-09T14:38:05.058+0000 7f75360cc140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T14:38:05.288 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:05 vm11 bash[37598]: debug 2026-03-09T14:38:05.102+0000 7f75360cc140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T14:38:05.288 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:05 vm11 bash[37598]: debug 2026-03-09T14:38:05.242+0000 7f75360cc140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T14:38:05.288 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:05 vm11 bash[17885]: cluster 2026-03-09T14:38:04.059777+0000 mon.a (mon.0) 889 : cluster [DBG] mgrmap e29: y(active, since 1.98858s) 2026-03-09T14:38:05.288 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:05 vm11 bash[17885]: cluster 2026-03-09T14:38:04.074619+0000 mgr.y (mgr.24991) 1 : cluster [DBG] pgmap v3: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:38:05.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:05 vm07 bash[22585]: cluster 2026-03-09T14:38:04.059777+0000 mon.a (mon.0) 889 : cluster [DBG] mgrmap e29: y(active, since 1.98858s) 2026-03-09T14:38:05.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:05 vm07 bash[22585]: cluster 2026-03-09T14:38:04.074619+0000 mgr.y (mgr.24991) 1 : cluster [DBG] pgmap v3: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:38:05.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:05 vm07 bash[17480]: cluster 2026-03-09T14:38:04.059777+0000 mon.a (mon.0) 889 : cluster [DBG] mgrmap e29: y(active, since 1.98858s) 2026-03-09T14:38:05.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:05 vm07 bash[17480]: cluster 2026-03-09T14:38:04.074619+0000 mgr.y (mgr.24991) 1 : cluster [DBG] pgmap v3: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:38:05.598 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:05 vm11 bash[37598]: debug 2026-03-09T14:38:05.286+0000 7f75360cc140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T14:38:05.599 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:05 vm11 bash[37598]: debug 2026-03-09T14:38:05.334+0000 7f75360cc140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T14:38:05.599 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:05 vm11 bash[37598]: debug 2026-03-09T14:38:05.442+0000 7f75360cc140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 
2026-03-09T14:38:05.857 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:05 vm11 bash[37598]: debug 2026-03-09T14:38:05.598+0000 7f75360cc140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T14:38:05.857 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:05 vm11 bash[37598]: debug 2026-03-09T14:38:05.778+0000 7f75360cc140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T14:38:05.857 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:05 vm11 bash[37598]: debug 2026-03-09T14:38:05.814+0000 7f75360cc140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T14:38:06.252 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:05 vm11 bash[37598]: debug 2026-03-09T14:38:05.854+0000 7f75360cc140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T14:38:06.252 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:06 vm11 bash[37598]: debug 2026-03-09T14:38:06.010+0000 7f75360cc140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T14:38:06.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:06 vm11 bash[17885]: cephadm 2026-03-09T14:38:04.962798+0000 mgr.y (mgr.24991) 2 : cephadm [INF] [09/Mar/2026:14:38:04] ENGINE Bus STARTING 2026-03-09T14:38:06.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:06 vm11 bash[17885]: cluster 2026-03-09T14:38:05.049956+0000 mgr.y (mgr.24991) 3 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:38:06.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:06 vm11 bash[17885]: cluster 2026-03-09T14:38:05.061444+0000 mon.a (mon.0) 890 : cluster [DBG] mgrmap e30: y(active, since 2s) 2026-03-09T14:38:06.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:06 vm11 bash[17885]: cephadm 2026-03-09T14:38:05.064738+0000 mgr.y (mgr.24991) 4 : cephadm [INF] [09/Mar/2026:14:38:05] ENGINE Serving on http://192.168.123.107:8765 2026-03-09T14:38:06.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:06 vm11 bash[17885]: cephadm 2026-03-09T14:38:05.173004+0000 mgr.y (mgr.24991) 5 : cephadm [INF] [09/Mar/2026:14:38:05] ENGINE Serving on https://192.168.123.107:7150 2026-03-09T14:38:06.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:06 vm11 bash[17885]: cephadm 2026-03-09T14:38:05.173072+0000 mgr.y (mgr.24991) 6 : cephadm [INF] [09/Mar/2026:14:38:05] ENGINE Bus STARTED 2026-03-09T14:38:06.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:06 vm11 bash[17885]: cephadm 2026-03-09T14:38:05.173472+0000 mgr.y (mgr.24991) 7 : cephadm [INF] [09/Mar/2026:14:38:05] ENGINE Client ('192.168.123.107', 47774) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T14:38:06.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:06 vm07 bash[22585]: cephadm 2026-03-09T14:38:04.962798+0000 mgr.y (mgr.24991) 2 : cephadm [INF] [09/Mar/2026:14:38:04] ENGINE Bus STARTING 2026-03-09T14:38:06.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:06 vm07 bash[22585]: cluster 2026-03-09T14:38:05.049956+0000 mgr.y (mgr.24991) 3 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:38:06.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:06 vm07 bash[22585]: cluster 2026-03-09T14:38:05.061444+0000 mon.a (mon.0) 890 : cluster [DBG] mgrmap e30: y(active, since 2s) 2026-03-09T14:38:06.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:06 vm07 bash[22585]: 
cephadm 2026-03-09T14:38:05.064738+0000 mgr.y (mgr.24991) 4 : cephadm [INF] [09/Mar/2026:14:38:05] ENGINE Serving on http://192.168.123.107:8765 2026-03-09T14:38:06.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:06 vm07 bash[22585]: cephadm 2026-03-09T14:38:05.173004+0000 mgr.y (mgr.24991) 5 : cephadm [INF] [09/Mar/2026:14:38:05] ENGINE Serving on https://192.168.123.107:7150 2026-03-09T14:38:06.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:06 vm07 bash[22585]: cephadm 2026-03-09T14:38:05.173072+0000 mgr.y (mgr.24991) 6 : cephadm [INF] [09/Mar/2026:14:38:05] ENGINE Bus STARTED 2026-03-09T14:38:06.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:06 vm07 bash[22585]: cephadm 2026-03-09T14:38:05.173472+0000 mgr.y (mgr.24991) 7 : cephadm [INF] [09/Mar/2026:14:38:05] ENGINE Client ('192.168.123.107', 47774) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T14:38:06.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:06 vm07 bash[17480]: cephadm 2026-03-09T14:38:04.962798+0000 mgr.y (mgr.24991) 2 : cephadm [INF] [09/Mar/2026:14:38:04] ENGINE Bus STARTING 2026-03-09T14:38:06.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:06 vm07 bash[17480]: cluster 2026-03-09T14:38:05.049956+0000 mgr.y (mgr.24991) 3 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:38:06.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:06 vm07 bash[17480]: cluster 2026-03-09T14:38:05.061444+0000 mon.a (mon.0) 890 : cluster [DBG] mgrmap e30: y(active, since 2s) 2026-03-09T14:38:06.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:06 vm07 bash[17480]: cephadm 2026-03-09T14:38:05.064738+0000 mgr.y (mgr.24991) 4 : cephadm [INF] [09/Mar/2026:14:38:05] ENGINE Serving on http://192.168.123.107:8765 2026-03-09T14:38:06.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:06 vm07 bash[17480]: cephadm 2026-03-09T14:38:05.173004+0000 mgr.y (mgr.24991) 5 : cephadm [INF] [09/Mar/2026:14:38:05] ENGINE Serving on https://192.168.123.107:7150 2026-03-09T14:38:06.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:06 vm07 bash[17480]: cephadm 2026-03-09T14:38:05.173072+0000 mgr.y (mgr.24991) 6 : cephadm [INF] [09/Mar/2026:14:38:05] ENGINE Bus STARTED 2026-03-09T14:38:06.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:06 vm07 bash[17480]: cephadm 2026-03-09T14:38:05.173472+0000 mgr.y (mgr.24991) 7 : cephadm [INF] [09/Mar/2026:14:38:05] ENGINE Client ('192.168.123.107', 47774) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T14:38:06.504 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:06 vm11 bash[37598]: debug 2026-03-09T14:38:06.250+0000 7f75360cc140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T14:38:06.504 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:06 vm11 bash[37598]: [09/Mar/2026:14:38:06] ENGINE Bus STARTING 2026-03-09T14:38:06.504 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:06 vm11 bash[37598]: CherryPy Checker: 2026-03-09T14:38:06.504 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:06 vm11 bash[37598]: The Application mounted at '' has an empty config. 
2026-03-09T14:38:06.504 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:06 vm11 bash[37598]: [09/Mar/2026:14:38:06] ENGINE Serving on http://:::9283 2026-03-09T14:38:06.504 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:06 vm11 bash[37598]: [09/Mar/2026:14:38:06] ENGINE Bus STARTED 2026-03-09T14:38:07.254 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:06 vm11 bash[40106]: ts=2026-03-09T14:38:06.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:38:07.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:07 vm11 bash[17885]: cluster 2026-03-09T14:38:06.260936+0000 mon.a (mon.0) 891 : cluster [DBG] Standby manager daemon x started 2026-03-09T14:38:07.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:07 vm11 bash[17885]: audit 2026-03-09T14:38:06.263590+0000 mon.c (mon.1) 119 : audit [DBG] from='mgr.? 192.168.123.111:0/1699169454' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T14:38:07.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:07 vm11 bash[17885]: audit 2026-03-09T14:38:06.263987+0000 mon.c (mon.1) 120 : audit [DBG] from='mgr.? 192.168.123.111:0/1699169454' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T14:38:07.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:07 vm11 bash[17885]: audit 2026-03-09T14:38:06.265228+0000 mon.c (mon.1) 121 : audit [DBG] from='mgr.? 192.168.123.111:0/1699169454' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T14:38:07.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:07 vm11 bash[17885]: audit 2026-03-09T14:38:06.265560+0000 mon.c (mon.1) 122 : audit [DBG] from='mgr.? 
192.168.123.111:0/1699169454' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T14:38:07.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:07 vm07 bash[22585]: cluster 2026-03-09T14:38:06.260936+0000 mon.a (mon.0) 891 : cluster [DBG] Standby manager daemon x started 2026-03-09T14:38:07.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:07 vm07 bash[22585]: audit 2026-03-09T14:38:06.263590+0000 mon.c (mon.1) 119 : audit [DBG] from='mgr.? 192.168.123.111:0/1699169454' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T14:38:07.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:07 vm07 bash[22585]: audit 2026-03-09T14:38:06.263987+0000 mon.c (mon.1) 120 : audit [DBG] from='mgr.? 192.168.123.111:0/1699169454' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T14:38:07.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:07 vm07 bash[22585]: audit 2026-03-09T14:38:06.265228+0000 mon.c (mon.1) 121 : audit [DBG] from='mgr.? 192.168.123.111:0/1699169454' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T14:38:07.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:07 vm07 bash[22585]: audit 2026-03-09T14:38:06.265560+0000 mon.c (mon.1) 122 : audit [DBG] from='mgr.? 192.168.123.111:0/1699169454' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T14:38:07.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:07 vm07 bash[17480]: cluster 2026-03-09T14:38:06.260936+0000 mon.a (mon.0) 891 : cluster [DBG] Standby manager daemon x started 2026-03-09T14:38:07.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:07 vm07 bash[17480]: audit 2026-03-09T14:38:06.263590+0000 mon.c (mon.1) 119 : audit [DBG] from='mgr.? 192.168.123.111:0/1699169454' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T14:38:07.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:07 vm07 bash[17480]: audit 2026-03-09T14:38:06.263987+0000 mon.c (mon.1) 120 : audit [DBG] from='mgr.? 192.168.123.111:0/1699169454' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T14:38:07.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:07 vm07 bash[17480]: audit 2026-03-09T14:38:06.265228+0000 mon.c (mon.1) 121 : audit [DBG] from='mgr.? 192.168.123.111:0/1699169454' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T14:38:07.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:07 vm07 bash[17480]: audit 2026-03-09T14:38:06.265560+0000 mon.c (mon.1) 122 : audit [DBG] from='mgr.? 
192.168.123.111:0/1699169454' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T14:38:07.907 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:38:07 vm07 bash[51060]: ts=2026-03-09T14:38:07.602Z caller=notify.go:743 level=info component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify success" attempts=5 2026-03-09T14:38:08.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:08 vm07 bash[22585]: cluster 2026-03-09T14:38:07.050397+0000 mgr.y (mgr.24991) 8 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:38:08.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:08 vm07 bash[22585]: cluster 2026-03-09T14:38:07.085941+0000 mon.a (mon.0) 892 : cluster [DBG] mgrmap e31: y(active, since 5s), standbys: x 2026-03-09T14:38:08.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:08 vm07 bash[22585]: audit 2026-03-09T14:38:07.091533+0000 mon.c (mon.1) 123 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T14:38:08.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:08 vm07 bash[17480]: cluster 2026-03-09T14:38:07.050397+0000 mgr.y (mgr.24991) 8 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:38:08.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:08 vm07 bash[17480]: cluster 2026-03-09T14:38:07.085941+0000 mon.a (mon.0) 892 : cluster [DBG] mgrmap e31: y(active, since 5s), standbys: x 2026-03-09T14:38:08.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:08 vm07 bash[17480]: audit 2026-03-09T14:38:07.091533+0000 mon.c (mon.1) 123 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T14:38:08.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:08 vm11 bash[17885]: cluster 2026-03-09T14:38:07.050397+0000 mgr.y (mgr.24991) 8 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:38:08.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:08 vm11 bash[17885]: cluster 2026-03-09T14:38:07.085941+0000 mon.a (mon.0) 892 : cluster [DBG] mgrmap e31: y(active, since 5s), standbys: x 2026-03-09T14:38:08.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:08 vm11 bash[17885]: audit 2026-03-09T14:38:07.091533+0000 mon.c (mon.1) 123 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T14:38:10.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:09 vm11 bash[17885]: audit 2026-03-09T14:38:08.998391+0000 mon.a (mon.0) 893 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:10.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:09 vm11 bash[17885]: audit 2026-03-09T14:38:09.005947+0000 mon.a (mon.0) 894 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:10.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:09 vm11 bash[17885]: cluster 2026-03-09T14:38:09.050773+0000 mgr.y (mgr.24991) 9 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:38:10.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:09 vm11 bash[17885]: audit 2026-03-09T14:38:09.140043+0000 mon.a (mon.0) 895 : audit [INF] from='mgr.24991 ' 
entity='mgr.y' 2026-03-09T14:38:10.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:09 vm11 bash[17885]: audit 2026-03-09T14:38:09.149912+0000 mon.a (mon.0) 896 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:10.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:09 vm11 bash[17885]: audit 2026-03-09T14:38:09.298061+0000 mgr.y (mgr.24991) 10 : audit [DBG] from='client.15045 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:38:10.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:09 vm11 bash[17885]: audit 2026-03-09T14:38:09.595561+0000 mon.a (mon.0) 897 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:10.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:09 vm11 bash[17885]: audit 2026-03-09T14:38:09.602208+0000 mon.a (mon.0) 898 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:10.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:09 vm11 bash[17885]: audit 2026-03-09T14:38:09.604103+0000 mon.c (mon.1) 124 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:38:10.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:09 vm11 bash[17885]: audit 2026-03-09T14:38:09.604353+0000 mon.a (mon.0) 899 : audit [INF] from='mgr.24991 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:38:10.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:09 vm11 bash[17885]: audit 2026-03-09T14:38:09.767364+0000 mon.a (mon.0) 900 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:10.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:09 vm11 bash[17885]: audit 2026-03-09T14:38:09.773646+0000 mon.a (mon.0) 901 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:10.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:10 vm07 bash[22585]: audit 2026-03-09T14:38:08.998391+0000 mon.a (mon.0) 893 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:10.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:10 vm07 bash[22585]: audit 2026-03-09T14:38:09.005947+0000 mon.a (mon.0) 894 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:10.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:10 vm07 bash[22585]: cluster 2026-03-09T14:38:09.050773+0000 mgr.y (mgr.24991) 9 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:38:10.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:10 vm07 bash[22585]: audit 2026-03-09T14:38:09.140043+0000 mon.a (mon.0) 895 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:10.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:10 vm07 bash[22585]: audit 2026-03-09T14:38:09.149912+0000 mon.a (mon.0) 896 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:10.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:10 vm07 bash[22585]: audit 2026-03-09T14:38:09.298061+0000 mgr.y (mgr.24991) 10 : audit [DBG] from='client.15045 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:38:10.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:10 vm07 bash[22585]: audit 2026-03-09T14:38:09.595561+0000 mon.a (mon.0) 897 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:10.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:10 
vm07 bash[22585]: audit 2026-03-09T14:38:09.602208+0000 mon.a (mon.0) 898 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:10.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:10 vm07 bash[22585]: audit 2026-03-09T14:38:09.604103+0000 mon.c (mon.1) 124 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:38:10.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:10 vm07 bash[22585]: audit 2026-03-09T14:38:09.604353+0000 mon.a (mon.0) 899 : audit [INF] from='mgr.24991 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:38:10.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:10 vm07 bash[22585]: audit 2026-03-09T14:38:09.767364+0000 mon.a (mon.0) 900 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:10.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:10 vm07 bash[22585]: audit 2026-03-09T14:38:09.773646+0000 mon.a (mon.0) 901 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:10.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:09 vm07 bash[17480]: audit 2026-03-09T14:38:08.998391+0000 mon.a (mon.0) 893 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:10.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:09 vm07 bash[17480]: audit 2026-03-09T14:38:09.005947+0000 mon.a (mon.0) 894 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:10.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:09 vm07 bash[17480]: cluster 2026-03-09T14:38:09.050773+0000 mgr.y (mgr.24991) 9 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:38:10.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:09 vm07 bash[17480]: audit 2026-03-09T14:38:09.140043+0000 mon.a (mon.0) 895 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:10.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:09 vm07 bash[17480]: audit 2026-03-09T14:38:09.149912+0000 mon.a (mon.0) 896 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:10.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:09 vm07 bash[17480]: audit 2026-03-09T14:38:09.298061+0000 mgr.y (mgr.24991) 10 : audit [DBG] from='client.15045 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:38:10.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:09 vm07 bash[17480]: audit 2026-03-09T14:38:09.595561+0000 mon.a (mon.0) 897 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:10.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:09 vm07 bash[17480]: audit 2026-03-09T14:38:09.602208+0000 mon.a (mon.0) 898 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:10.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:09 vm07 bash[17480]: audit 2026-03-09T14:38:09.604103+0000 mon.c (mon.1) 124 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:38:10.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:09 vm07 bash[17480]: audit 2026-03-09T14:38:09.604353+0000 mon.a (mon.0) 899 : audit [INF] from='mgr.24991 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:38:10.408 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:09 vm07 bash[17480]: audit 2026-03-09T14:38:09.767364+0000 mon.a (mon.0) 900 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:10.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:09 vm07 bash[17480]: audit 2026-03-09T14:38:09.773646+0000 mon.a (mon.0) 901 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:12.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:12 vm11 bash[17885]: cluster 2026-03-09T14:38:11.051286+0000 mgr.y (mgr.24991) 11 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T14:38:12.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:12 vm07 bash[22585]: cluster 2026-03-09T14:38:11.051286+0000 mgr.y (mgr.24991) 11 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T14:38:12.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:12 vm07 bash[17480]: cluster 2026-03-09T14:38:11.051286+0000 mgr.y (mgr.24991) 11 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T14:38:14.004 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:13 vm11 bash[37598]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:38:13] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.51.0" 2026-03-09T14:38:14.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:14 vm07 bash[22585]: cluster 2026-03-09T14:38:13.051579+0000 mgr.y (mgr.24991) 12 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T14:38:14.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:14 vm07 bash[17480]: cluster 2026-03-09T14:38:13.051579+0000 mgr.y (mgr.24991) 12 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T14:38:14.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:14 vm11 bash[17885]: cluster 2026-03-09T14:38:13.051579+0000 mgr.y (mgr.24991) 12 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T14:38:16.368 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:16 vm07 bash[22585]: cluster 2026-03-09T14:38:15.052121+0000 mgr.y (mgr.24991) 13 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T14:38:16.368 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:16 vm07 bash[17480]: cluster 2026-03-09T14:38:15.052121+0000 mgr.y (mgr.24991) 13 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T14:38:16.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:16 vm11 bash[17885]: cluster 2026-03-09T14:38:15.052121+0000 mgr.y (mgr.24991) 13 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T14:38:17.253 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:16 vm11 bash[40106]: ts=2026-03-09T14:38:16.947Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: 
CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:38:17.735 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:17 vm11 bash[17885]: audit 2026-03-09T14:38:16.428544+0000 mon.a (mon.0) 902 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:17.735 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:17 vm11 bash[17885]: audit 2026-03-09T14:38:16.435841+0000 mon.a (mon.0) 903 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:17.735 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:17 vm11 bash[17885]: audit 2026-03-09T14:38:16.438603+0000 mon.c (mon.1) 125 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:38:17.735 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:17 vm11 bash[17885]: audit 2026-03-09T14:38:16.438831+0000 mon.a (mon.0) 904 : audit [INF] from='mgr.24991 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:38:17.735 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:17 vm11 bash[17885]: audit 2026-03-09T14:38:16.439551+0000 mon.c (mon.1) 126 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:38:17.735 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:17 vm11 bash[17885]: audit 2026-03-09T14:38:16.440061+0000 mon.c (mon.1) 127 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:38:17.735 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:17 vm11 bash[17885]: audit 2026-03-09T14:38:16.581809+0000 mon.a (mon.0) 905 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:17.735 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:17 vm11 bash[17885]: audit 2026-03-09T14:38:16.587457+0000 mon.a (mon.0) 906 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:17.735 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:17 vm11 bash[17885]: audit 2026-03-09T14:38:16.596513+0000 mon.a (mon.0) 907 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:17.735 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:17 vm11 bash[17885]: audit 2026-03-09T14:38:16.602238+0000 mon.a (mon.0) 908 : audit [INF] 
from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:17.735 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:17 vm11 bash[17885]: audit 2026-03-09T14:38:16.607204+0000 mon.a (mon.0) 909 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:17.735 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:17 vm11 bash[17885]: audit 2026-03-09T14:38:16.619827+0000 mon.c (mon.1) 128 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm07.ohlmos", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T14:38:17.735 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:17 vm11 bash[17885]: audit 2026-03-09T14:38:16.620140+0000 mon.a (mon.0) 910 : audit [INF] from='mgr.24991 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm07.ohlmos", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T14:38:17.735 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:17 vm11 bash[17885]: audit 2026-03-09T14:38:16.623943+0000 mon.c (mon.1) 129 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:38:17.735 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:17 vm11 bash[17885]: audit 2026-03-09T14:38:17.088296+0000 mon.a (mon.0) 911 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:17.735 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:17 vm11 bash[17885]: audit 2026-03-09T14:38:17.093757+0000 mon.a (mon.0) 912 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:17.863 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:17 vm07 bash[22585]: audit 2026-03-09T14:38:16.428544+0000 mon.a (mon.0) 902 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:17.863 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:17 vm07 bash[22585]: audit 2026-03-09T14:38:16.435841+0000 mon.a (mon.0) 903 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:17.863 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:17 vm07 bash[22585]: audit 2026-03-09T14:38:16.438603+0000 mon.c (mon.1) 125 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:38:17.863 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:17 vm07 bash[22585]: audit 2026-03-09T14:38:16.438831+0000 mon.a (mon.0) 904 : audit [INF] from='mgr.24991 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:38:17.863 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:17 vm07 bash[22585]: audit 2026-03-09T14:38:16.439551+0000 mon.c (mon.1) 126 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:38:17.863 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:17 vm07 bash[22585]: audit 2026-03-09T14:38:16.440061+0000 mon.c (mon.1) 127 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:38:17.863 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:17 vm07 bash[22585]: audit 2026-03-09T14:38:16.581809+0000 mon.a (mon.0) 905 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:17.863 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:17 vm07 bash[22585]: audit 2026-03-09T14:38:16.587457+0000 mon.a (mon.0) 906 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:17.863 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:17 vm07 bash[22585]: audit 2026-03-09T14:38:16.596513+0000 mon.a (mon.0) 907 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:17.863 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:17 vm07 bash[22585]: audit 2026-03-09T14:38:16.602238+0000 mon.a (mon.0) 908 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:17.863 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:17 vm07 bash[22585]: audit 2026-03-09T14:38:16.607204+0000 mon.a (mon.0) 909 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:17.863 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:17 vm07 bash[22585]: audit 2026-03-09T14:38:16.619827+0000 mon.c (mon.1) 128 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm07.ohlmos", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T14:38:17.863 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:17 vm07 bash[22585]: audit 2026-03-09T14:38:16.620140+0000 mon.a (mon.0) 910 : audit [INF] from='mgr.24991 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm07.ohlmos", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T14:38:17.863 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:17 vm07 bash[22585]: audit 2026-03-09T14:38:16.623943+0000 mon.c (mon.1) 129 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:38:17.863 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:17 vm07 bash[22585]: audit 2026-03-09T14:38:17.088296+0000 mon.a (mon.0) 911 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:17.863 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:17 vm07 bash[22585]: audit 2026-03-09T14:38:17.093757+0000 mon.a (mon.0) 912 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:17.863 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:17 vm07 bash[17480]: audit 2026-03-09T14:38:16.428544+0000 mon.a (mon.0) 902 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:17.863 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:17 vm07 bash[17480]: audit 2026-03-09T14:38:16.435841+0000 mon.a (mon.0) 903 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:17.863 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:17 vm07 bash[17480]: audit 2026-03-09T14:38:16.438603+0000 mon.c (mon.1) 125 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:38:17.863 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:17 vm07 bash[17480]: audit 2026-03-09T14:38:16.438831+0000 mon.a (mon.0) 904 : audit [INF] 
from='mgr.24991 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:38:17.863 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:17 vm07 bash[17480]: audit 2026-03-09T14:38:16.439551+0000 mon.c (mon.1) 126 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:38:17.863 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:17 vm07 bash[17480]: audit 2026-03-09T14:38:16.440061+0000 mon.c (mon.1) 127 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:38:17.863 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:17 vm07 bash[17480]: audit 2026-03-09T14:38:16.581809+0000 mon.a (mon.0) 905 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:17.863 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:17 vm07 bash[17480]: audit 2026-03-09T14:38:16.587457+0000 mon.a (mon.0) 906 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:17.863 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:17 vm07 bash[17480]: audit 2026-03-09T14:38:16.596513+0000 mon.a (mon.0) 907 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:17.863 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:17 vm07 bash[17480]: audit 2026-03-09T14:38:16.602238+0000 mon.a (mon.0) 908 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:17.863 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:17 vm07 bash[17480]: audit 2026-03-09T14:38:16.607204+0000 mon.a (mon.0) 909 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:17.863 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:17 vm07 bash[17480]: audit 2026-03-09T14:38:16.619827+0000 mon.c (mon.1) 128 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm07.ohlmos", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T14:38:17.863 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:17 vm07 bash[17480]: audit 2026-03-09T14:38:16.620140+0000 mon.a (mon.0) 910 : audit [INF] from='mgr.24991 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm07.ohlmos", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T14:38:17.863 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:17 vm07 bash[17480]: audit 2026-03-09T14:38:16.623943+0000 mon.c (mon.1) 129 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:38:17.863 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:17 vm07 bash[17480]: audit 2026-03-09T14:38:17.088296+0000 mon.a (mon.0) 911 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:17.864 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:17 vm07 bash[17480]: audit 2026-03-09T14:38:17.093757+0000 mon.a (mon.0) 912 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:18.004 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:17 vm11 systemd[1]: Stopping Ceph prometheus.a for 
f59f9828-1bc3-11f1-bfd8-7b3d0c866040... 2026-03-09T14:38:18.004 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:17 vm11 bash[40106]: ts=2026-03-09T14:38:17.837Z caller=main.go:964 level=warn msg="Received SIGTERM, exiting gracefully..." 2026-03-09T14:38:18.004 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:17 vm11 bash[40106]: ts=2026-03-09T14:38:17.837Z caller=main.go:988 level=info msg="Stopping scrape discovery manager..." 2026-03-09T14:38:18.004 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:17 vm11 bash[40106]: ts=2026-03-09T14:38:17.837Z caller=main.go:1002 level=info msg="Stopping notify discovery manager..." 2026-03-09T14:38:18.004 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:17 vm11 bash[40106]: ts=2026-03-09T14:38:17.837Z caller=manager.go:177 level=info component="rule manager" msg="Stopping rule manager..." 2026-03-09T14:38:18.004 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:17 vm11 bash[40106]: ts=2026-03-09T14:38:17.837Z caller=manager.go:187 level=info component="rule manager" msg="Rule manager stopped" 2026-03-09T14:38:18.004 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:17 vm11 bash[40106]: ts=2026-03-09T14:38:17.837Z caller=main.go:1039 level=info msg="Stopping scrape manager..." 2026-03-09T14:38:18.004 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:17 vm11 bash[40106]: ts=2026-03-09T14:38:17.837Z caller=main.go:984 level=info msg="Scrape discovery manager stopped" 2026-03-09T14:38:18.004 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:17 vm11 bash[40106]: ts=2026-03-09T14:38:17.837Z caller=main.go:998 level=info msg="Notify discovery manager stopped" 2026-03-09T14:38:18.004 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:17 vm11 bash[40106]: ts=2026-03-09T14:38:17.838Z caller=main.go:1031 level=info msg="Scrape manager stopped" 2026-03-09T14:38:18.004 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:17 vm11 bash[40106]: ts=2026-03-09T14:38:17.839Z caller=notifier.go:618 level=info component=notifier msg="Stopping notification manager..." 2026-03-09T14:38:18.004 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:17 vm11 bash[40106]: ts=2026-03-09T14:38:17.839Z caller=main.go:1261 level=info msg="Notifier manager stopped" 2026-03-09T14:38:18.004 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:17 vm11 bash[40106]: ts=2026-03-09T14:38:17.839Z caller=main.go:1273 level=info msg="See you next time!" 2026-03-09T14:38:18.004 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:17 vm11 bash[41213]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-prometheus-a 2026-03-09T14:38:18.004 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:17 vm11 systemd[1]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@prometheus.a.service: Deactivated successfully. 2026-03-09T14:38:18.004 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:17 vm11 systemd[1]: Stopped Ceph prometheus.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:38:18.004 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:17 vm11 systemd[1]: Started Ceph prometheus.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 
2026-03-09T14:38:18.284 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:18 vm11 bash[41290]: ts=2026-03-09T14:38:18.036Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)" 2026-03-09T14:38:18.284 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:18 vm11 bash[41290]: ts=2026-03-09T14:38:18.036Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)" 2026-03-09T14:38:18.285 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:18 vm11 bash[41290]: ts=2026-03-09T14:38:18.036Z caller=main.go:623 level=info host_details="(Linux 5.15.0-1092-kvm #97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026 x86_64 vm11 (none))" 2026-03-09T14:38:18.285 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:18 vm11 bash[41290]: ts=2026-03-09T14:38:18.036Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" 2026-03-09T14:38:18.285 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:18 vm11 bash[41290]: ts=2026-03-09T14:38:18.036Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" 2026-03-09T14:38:18.285 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:18 vm11 bash[41290]: ts=2026-03-09T14:38:18.041Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=:9095 2026-03-09T14:38:18.285 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:18 vm11 bash[41290]: ts=2026-03-09T14:38:18.041Z caller=main.go:1129 level=info msg="Starting TSDB ..." 2026-03-09T14:38:18.285 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:18 vm11 bash[41290]: ts=2026-03-09T14:38:18.042Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9095 2026-03-09T14:38:18.285 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:18 vm11 bash[41290]: ts=2026-03-09T14:38:18.042Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." 
http2=false address=[::]:9095 2026-03-09T14:38:18.285 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:18 vm11 bash[41290]: ts=2026-03-09T14:38:18.045Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 2026-03-09T14:38:18.285 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:18 vm11 bash[41290]: ts=2026-03-09T14:38:18.045Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.273µs 2026-03-09T14:38:18.285 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:18 vm11 bash[41290]: ts=2026-03-09T14:38:18.045Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" 2026-03-09T14:38:18.285 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:18 vm11 bash[41290]: ts=2026-03-09T14:38:18.055Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=3 2026-03-09T14:38:18.285 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:18 vm11 bash[41290]: ts=2026-03-09T14:38:18.070Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=1 maxSegment=3 2026-03-09T14:38:18.285 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:18 vm11 bash[41290]: ts=2026-03-09T14:38:18.074Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=2 maxSegment=3 2026-03-09T14:38:18.285 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:18 vm11 bash[41290]: ts=2026-03-09T14:38:18.080Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=3 maxSegment=3 2026-03-09T14:38:18.285 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:18 vm11 bash[41290]: ts=2026-03-09T14:38:18.080Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=18.033µs wal_replay_duration=35.298871ms wbl_replay_duration=54.291µs total_replay_duration=35.396845ms 2026-03-09T14:38:18.285 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:18 vm11 bash[41290]: ts=2026-03-09T14:38:18.083Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC 2026-03-09T14:38:18.285 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:18 vm11 bash[41290]: ts=2026-03-09T14:38:18.084Z caller=main.go:1153 level=info msg="TSDB started" 2026-03-09T14:38:18.285 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:18 vm11 bash[41290]: ts=2026-03-09T14:38:18.084Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 2026-03-09T14:38:18.285 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:18 vm11 bash[41290]: ts=2026-03-09T14:38:18.094Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=10.616525ms db_storage=1.172µs remote_storage=1.122µs web_handler=341ns query_engine=791ns scrape=798.692µs scrape_sd=94.417µs notify=7.184µs notify_sd=6.633µs rules=9.266766ms tracing=5.209µs 2026-03-09T14:38:18.285 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:18 vm11 bash[41290]: ts=2026-03-09T14:38:18.095Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." 2026-03-09T14:38:18.285 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:18 vm11 bash[41290]: ts=2026-03-09T14:38:18.094Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 
2026-03-09T14:38:18.570 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:18 vm11 bash[17885]: cephadm 2026-03-09T14:38:16.440708+0000 mgr.y (mgr.24991) 14 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-09T14:38:18.570 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:18 vm11 bash[17885]: cephadm 2026-03-09T14:38:16.440811+0000 mgr.y (mgr.24991) 15 : cephadm [INF] Updating vm11:/etc/ceph/ceph.conf 2026-03-09T14:38:18.570 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:18 vm11 bash[17885]: cephadm 2026-03-09T14:38:16.474306+0000 mgr.y (mgr.24991) 16 : cephadm [INF] Updating vm07:/var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/config/ceph.conf 2026-03-09T14:38:18.570 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:18 vm11 bash[17885]: cephadm 2026-03-09T14:38:16.477773+0000 mgr.y (mgr.24991) 17 : cephadm [INF] Updating vm11:/var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/config/ceph.conf 2026-03-09T14:38:18.570 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:18 vm11 bash[17885]: cephadm 2026-03-09T14:38:16.508380+0000 mgr.y (mgr.24991) 18 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:38:18.570 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:18 vm11 bash[17885]: cephadm 2026-03-09T14:38:16.513060+0000 mgr.y (mgr.24991) 19 : cephadm [INF] Updating vm11:/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:38:18.570 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:18 vm11 bash[17885]: cephadm 2026-03-09T14:38:16.540996+0000 mgr.y (mgr.24991) 20 : cephadm [INF] Updating vm07:/var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/config/ceph.client.admin.keyring 2026-03-09T14:38:18.570 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:18 vm11 bash[17885]: cephadm 2026-03-09T14:38:16.547640+0000 mgr.y (mgr.24991) 21 : cephadm [INF] Updating vm11:/var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/config/ceph.client.admin.keyring 2026-03-09T14:38:18.570 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:18 vm11 bash[17885]: cephadm 2026-03-09T14:38:16.619517+0000 mgr.y (mgr.24991) 22 : cephadm [INF] Reconfiguring iscsi.foo.vm07.ohlmos (dependencies changed)... 2026-03-09T14:38:18.571 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:18 vm11 bash[17885]: cephadm 2026-03-09T14:38:16.624664+0000 mgr.y (mgr.24991) 23 : cephadm [INF] Reconfiguring daemon iscsi.foo.vm07.ohlmos on vm07 2026-03-09T14:38:18.571 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:18 vm11 bash[17885]: cluster 2026-03-09T14:38:17.052507+0000 mgr.y (mgr.24991) 24 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T14:38:18.571 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:18 vm11 bash[17885]: cephadm 2026-03-09T14:38:17.095276+0000 mgr.y (mgr.24991) 25 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-09T14:38:18.571 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:18 vm11 bash[17885]: cephadm 2026-03-09T14:38:17.304324+0000 mgr.y (mgr.24991) 26 : cephadm [INF] Reconfiguring daemon prometheus.a on vm11 2026-03-09T14:38:18.571 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:18 vm11 bash[17885]: audit 2026-03-09T14:38:17.616315+0000 mon.a (mon.0) 913 : audit [DBG] from='client.? 
192.168.123.107:0/2165411588' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T14:38:18.571 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:18 vm11 bash[17885]: audit 2026-03-09T14:38:17.913519+0000 mon.a (mon.0) 914 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:18.571 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:18 vm11 bash[17885]: audit 2026-03-09T14:38:17.919493+0000 mon.a (mon.0) 915 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:18.571 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:18 vm11 bash[17885]: audit 2026-03-09T14:38:17.922071+0000 mon.c (mon.1) 130 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T14:38:18.571 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:18 vm11 bash[17885]: audit 2026-03-09T14:38:17.931936+0000 mon.a (mon.0) 916 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:18.571 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:18 vm11 bash[17885]: audit 2026-03-09T14:38:17.932909+0000 mon.c (mon.1) 131 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T14:38:18.571 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:18 vm11 bash[17885]: audit 2026-03-09T14:38:17.934096+0000 mon.c (mon.1) 132 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm07"}]: dispatch 2026-03-09T14:38:18.571 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:18 vm11 bash[17885]: audit 2026-03-09T14:38:17.938639+0000 mon.a (mon.0) 917 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:18.571 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:18 vm11 bash[17885]: audit 2026-03-09T14:38:17.941429+0000 mon.c (mon.1) 133 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T14:38:18.571 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:18 vm11 bash[17885]: audit 2026-03-09T14:38:17.970616+0000 mon.c (mon.1) 134 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:38:18.571 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:18 vm11 bash[17885]: audit 2026-03-09T14:38:18.336289+0000 mon.c (mon.1) 135 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:38:18.571 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:18 vm11 bash[17885]: audit 2026-03-09T14:38:18.414697+0000 mon.c (mon.1) 136 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T14:38:18.571 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:18 vm11 bash[17885]: audit 2026-03-09T14:38:18.415099+0000 mon.a (mon.0) 918 : audit [INF] from='mgr.24991 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T14:38:18.571 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:18 vm11 bash[17885]: audit 2026-03-09T14:38:18.415924+0000 mon.c (mon.1) 137 : audit [DBG] 
from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T14:38:18.571 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:18 vm11 bash[17885]: audit 2026-03-09T14:38:18.416556+0000 mon.c (mon.1) 138 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:38:18.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:18 vm07 bash[22585]: cephadm 2026-03-09T14:38:16.440708+0000 mgr.y (mgr.24991) 14 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-09T14:38:18.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:18 vm07 bash[22585]: cephadm 2026-03-09T14:38:16.440811+0000 mgr.y (mgr.24991) 15 : cephadm [INF] Updating vm11:/etc/ceph/ceph.conf 2026-03-09T14:38:18.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:18 vm07 bash[22585]: cephadm 2026-03-09T14:38:16.474306+0000 mgr.y (mgr.24991) 16 : cephadm [INF] Updating vm07:/var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/config/ceph.conf 2026-03-09T14:38:18.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:18 vm07 bash[22585]: cephadm 2026-03-09T14:38:16.477773+0000 mgr.y (mgr.24991) 17 : cephadm [INF] Updating vm11:/var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/config/ceph.conf 2026-03-09T14:38:18.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:18 vm07 bash[22585]: cephadm 2026-03-09T14:38:16.508380+0000 mgr.y (mgr.24991) 18 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:38:18.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:18 vm07 bash[22585]: cephadm 2026-03-09T14:38:16.513060+0000 mgr.y (mgr.24991) 19 : cephadm [INF] Updating vm11:/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:38:18.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:18 vm07 bash[22585]: cephadm 2026-03-09T14:38:16.540996+0000 mgr.y (mgr.24991) 20 : cephadm [INF] Updating vm07:/var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/config/ceph.client.admin.keyring 2026-03-09T14:38:18.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:18 vm07 bash[22585]: cephadm 2026-03-09T14:38:16.547640+0000 mgr.y (mgr.24991) 21 : cephadm [INF] Updating vm11:/var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/config/ceph.client.admin.keyring 2026-03-09T14:38:18.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:18 vm07 bash[22585]: cephadm 2026-03-09T14:38:16.619517+0000 mgr.y (mgr.24991) 22 : cephadm [INF] Reconfiguring iscsi.foo.vm07.ohlmos (dependencies changed)... 2026-03-09T14:38:18.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:18 vm07 bash[22585]: cephadm 2026-03-09T14:38:16.624664+0000 mgr.y (mgr.24991) 23 : cephadm [INF] Reconfiguring daemon iscsi.foo.vm07.ohlmos on vm07 2026-03-09T14:38:18.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:18 vm07 bash[22585]: cluster 2026-03-09T14:38:17.052507+0000 mgr.y (mgr.24991) 24 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T14:38:18.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:18 vm07 bash[22585]: cephadm 2026-03-09T14:38:17.095276+0000 mgr.y (mgr.24991) 25 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 
2026-03-09T14:38:18.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:18 vm07 bash[22585]: cephadm 2026-03-09T14:38:17.304324+0000 mgr.y (mgr.24991) 26 : cephadm [INF] Reconfiguring daemon prometheus.a on vm11 2026-03-09T14:38:18.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:18 vm07 bash[22585]: audit 2026-03-09T14:38:17.616315+0000 mon.a (mon.0) 913 : audit [DBG] from='client.? 192.168.123.107:0/2165411588' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T14:38:18.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:18 vm07 bash[22585]: audit 2026-03-09T14:38:17.913519+0000 mon.a (mon.0) 914 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:18.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:18 vm07 bash[22585]: audit 2026-03-09T14:38:17.919493+0000 mon.a (mon.0) 915 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:18.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:18 vm07 bash[22585]: audit 2026-03-09T14:38:17.922071+0000 mon.c (mon.1) 130 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T14:38:18.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:18 vm07 bash[22585]: audit 2026-03-09T14:38:17.931936+0000 mon.a (mon.0) 916 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:18.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:18 vm07 bash[22585]: audit 2026-03-09T14:38:17.932909+0000 mon.c (mon.1) 131 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T14:38:18.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:18 vm07 bash[22585]: audit 2026-03-09T14:38:17.934096+0000 mon.c (mon.1) 132 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm07"}]: dispatch 2026-03-09T14:38:18.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:18 vm07 bash[22585]: audit 2026-03-09T14:38:17.938639+0000 mon.a (mon.0) 917 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:18.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:18 vm07 bash[22585]: audit 2026-03-09T14:38:17.941429+0000 mon.c (mon.1) 133 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T14:38:18.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:18 vm07 bash[22585]: audit 2026-03-09T14:38:17.970616+0000 mon.c (mon.1) 134 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:38:18.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:18 vm07 bash[22585]: audit 2026-03-09T14:38:18.336289+0000 mon.c (mon.1) 135 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:38:18.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:18 vm07 bash[22585]: audit 2026-03-09T14:38:18.414697+0000 mon.c (mon.1) 136 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T14:38:18.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:18 vm07 bash[22585]: audit 
2026-03-09T14:38:18.415099+0000 mon.a (mon.0) 918 : audit [INF] from='mgr.24991 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T14:38:18.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:18 vm07 bash[22585]: audit 2026-03-09T14:38:18.415924+0000 mon.c (mon.1) 137 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T14:38:18.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:18 vm07 bash[22585]: audit 2026-03-09T14:38:18.416556+0000 mon.c (mon.1) 138 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:38:18.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:18 vm07 bash[17480]: cephadm 2026-03-09T14:38:16.440708+0000 mgr.y (mgr.24991) 14 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-09T14:38:18.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:18 vm07 bash[17480]: cephadm 2026-03-09T14:38:16.440811+0000 mgr.y (mgr.24991) 15 : cephadm [INF] Updating vm11:/etc/ceph/ceph.conf 2026-03-09T14:38:18.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:18 vm07 bash[17480]: cephadm 2026-03-09T14:38:16.474306+0000 mgr.y (mgr.24991) 16 : cephadm [INF] Updating vm07:/var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/config/ceph.conf 2026-03-09T14:38:18.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:18 vm07 bash[17480]: cephadm 2026-03-09T14:38:16.477773+0000 mgr.y (mgr.24991) 17 : cephadm [INF] Updating vm11:/var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/config/ceph.conf 2026-03-09T14:38:18.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:18 vm07 bash[17480]: cephadm 2026-03-09T14:38:16.508380+0000 mgr.y (mgr.24991) 18 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:38:18.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:18 vm07 bash[17480]: cephadm 2026-03-09T14:38:16.513060+0000 mgr.y (mgr.24991) 19 : cephadm [INF] Updating vm11:/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:38:18.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:18 vm07 bash[17480]: cephadm 2026-03-09T14:38:16.540996+0000 mgr.y (mgr.24991) 20 : cephadm [INF] Updating vm07:/var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/config/ceph.client.admin.keyring 2026-03-09T14:38:18.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:18 vm07 bash[17480]: cephadm 2026-03-09T14:38:16.547640+0000 mgr.y (mgr.24991) 21 : cephadm [INF] Updating vm11:/var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/config/ceph.client.admin.keyring 2026-03-09T14:38:18.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:18 vm07 bash[17480]: cephadm 2026-03-09T14:38:16.619517+0000 mgr.y (mgr.24991) 22 : cephadm [INF] Reconfiguring iscsi.foo.vm07.ohlmos (dependencies changed)... 
2026-03-09T14:38:18.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:18 vm07 bash[17480]: cephadm 2026-03-09T14:38:16.624664+0000 mgr.y (mgr.24991) 23 : cephadm [INF] Reconfiguring daemon iscsi.foo.vm07.ohlmos on vm07 2026-03-09T14:38:18.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:18 vm07 bash[17480]: cluster 2026-03-09T14:38:17.052507+0000 mgr.y (mgr.24991) 24 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T14:38:18.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:18 vm07 bash[17480]: cephadm 2026-03-09T14:38:17.095276+0000 mgr.y (mgr.24991) 25 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-09T14:38:18.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:18 vm07 bash[17480]: cephadm 2026-03-09T14:38:17.304324+0000 mgr.y (mgr.24991) 26 : cephadm [INF] Reconfiguring daemon prometheus.a on vm11 2026-03-09T14:38:18.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:18 vm07 bash[17480]: audit 2026-03-09T14:38:17.616315+0000 mon.a (mon.0) 913 : audit [DBG] from='client.? 192.168.123.107:0/2165411588' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T14:38:18.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:18 vm07 bash[17480]: audit 2026-03-09T14:38:17.913519+0000 mon.a (mon.0) 914 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:18.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:18 vm07 bash[17480]: audit 2026-03-09T14:38:17.919493+0000 mon.a (mon.0) 915 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:18.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:18 vm07 bash[17480]: audit 2026-03-09T14:38:17.922071+0000 mon.c (mon.1) 130 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T14:38:18.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:18 vm07 bash[17480]: audit 2026-03-09T14:38:17.931936+0000 mon.a (mon.0) 916 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:18.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:18 vm07 bash[17480]: audit 2026-03-09T14:38:17.932909+0000 mon.c (mon.1) 131 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T14:38:18.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:18 vm07 bash[17480]: audit 2026-03-09T14:38:17.934096+0000 mon.c (mon.1) 132 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm07"}]: dispatch 2026-03-09T14:38:18.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:18 vm07 bash[17480]: audit 2026-03-09T14:38:17.938639+0000 mon.a (mon.0) 917 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:18.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:18 vm07 bash[17480]: audit 2026-03-09T14:38:17.941429+0000 mon.c (mon.1) 133 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T14:38:18.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:18 vm07 bash[17480]: audit 2026-03-09T14:38:17.970616+0000 mon.c (mon.1) 134 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 
2026-03-09T14:38:18.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:18 vm07 bash[17480]: audit 2026-03-09T14:38:18.336289+0000 mon.c (mon.1) 135 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:38:18.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:18 vm07 bash[17480]: audit 2026-03-09T14:38:18.414697+0000 mon.c (mon.1) 136 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T14:38:18.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:18 vm07 bash[17480]: audit 2026-03-09T14:38:18.415099+0000 mon.a (mon.0) 918 : audit [INF] from='mgr.24991 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T14:38:18.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:18 vm07 bash[17480]: audit 2026-03-09T14:38:18.415924+0000 mon.c (mon.1) 137 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T14:38:18.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:18 vm07 bash[17480]: audit 2026-03-09T14:38:18.416556+0000 mon.c (mon.1) 138 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:38:19.206 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:38:18 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:19.206 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:38:18 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:19.206 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:18 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:19.207 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:38:18 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:38:19.207 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:38:18 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:19.207 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:38:18 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:19.207 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:18 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:19.207 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:18 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:19.207 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:18 vm11 systemd[1]: Stopping Ceph mgr.x for f59f9828-1bc3-11f1-bfd8-7b3d0c866040... 2026-03-09T14:38:19.207 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:19 vm11 bash[41567]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-mgr-x 2026-03-09T14:38:19.207 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:19 vm11 systemd[1]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@mgr.x.service: Main process exited, code=exited, status=143/n/a 2026-03-09T14:38:19.207 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:19 vm11 systemd[1]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@mgr.x.service: Failed with result 'exit-code'. 2026-03-09T14:38:19.207 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:19 vm11 systemd[1]: Stopped Ceph mgr.x for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:38:19.207 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:38:18 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:19.501 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:38:19 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:19.501 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:38:19 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:19.501 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:19 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:19.501 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:38:19 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:19.501 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:38:19 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:19.501 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:38:19 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:19.501 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:38:19 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:19.502 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:19 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:19.502 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:19 vm11 systemd[1]: Started Ceph mgr.x for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 
2026-03-09T14:38:19.502 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:19 vm11 bash[41682]: debug 2026-03-09T14:38:19.502+0000 7f2f93e4f140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T14:38:19.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:19 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:19.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:19 vm11 bash[17885]: audit 2026-03-09T14:38:17.922475+0000 mgr.y (mgr.24991) 27 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T14:38:19.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:19 vm11 bash[17885]: cephadm 2026-03-09T14:38:17.932783+0000 mgr.y (mgr.24991) 28 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.107:5000 to Dashboard 2026-03-09T14:38:19.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:19 vm11 bash[17885]: audit 2026-03-09T14:38:17.933178+0000 mgr.y (mgr.24991) 29 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T14:38:19.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:19 vm11 bash[17885]: audit 2026-03-09T14:38:17.934322+0000 mgr.y (mgr.24991) 30 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm07"}]: dispatch 2026-03-09T14:38:19.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:19 vm11 bash[17885]: audit 2026-03-09T14:38:17.941682+0000 mgr.y (mgr.24991) 31 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T14:38:19.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:19 vm11 bash[17885]: cephadm 2026-03-09T14:38:18.414388+0000 mgr.y (mgr.24991) 32 : cephadm [INF] Upgrade: Updating mgr.x 2026-03-09T14:38:19.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:19 vm11 bash[17885]: cephadm 2026-03-09T14:38:18.417209+0000 mgr.y (mgr.24991) 33 : cephadm [INF] Deploying daemon mgr.x on vm11 2026-03-09T14:38:19.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:19 vm11 bash[17885]: cluster 2026-03-09T14:38:19.052820+0000 mgr.y (mgr.24991) 34 : cluster [DBG] pgmap v11: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T14:38:19.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:19 vm11 bash[17885]: audit 2026-03-09T14:38:19.326771+0000 mon.a (mon.0) 919 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:19.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:19 vm11 bash[17885]: audit 2026-03-09T14:38:19.334428+0000 mon.a (mon.0) 920 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:19.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:19 vm11 bash[41682]: debug 2026-03-09T14:38:19.538+0000 7f2f93e4f140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T14:38:19.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:19 vm11 bash[41682]: debug 2026-03-09T14:38:19.666+0000 7f2f93e4f140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T14:38:19.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:19 vm07 bash[22585]: audit 2026-03-09T14:38:17.922475+0000 mgr.y (mgr.24991) 27 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T14:38:19.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:19 vm07 bash[22585]: cephadm 2026-03-09T14:38:17.932783+0000 mgr.y (mgr.24991) 28 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.107:5000 to Dashboard 2026-03-09T14:38:19.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:19 vm07 bash[22585]: audit 2026-03-09T14:38:17.933178+0000 mgr.y (mgr.24991) 29 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T14:38:19.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:19 vm07 bash[22585]: audit 2026-03-09T14:38:17.934322+0000 mgr.y (mgr.24991) 30 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm07"}]: dispatch 2026-03-09T14:38:19.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:19 vm07 bash[22585]: audit 2026-03-09T14:38:17.941682+0000 mgr.y (mgr.24991) 31 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T14:38:19.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:19 vm07 bash[22585]: cephadm 2026-03-09T14:38:18.414388+0000 mgr.y (mgr.24991) 32 : cephadm [INF] Upgrade: Updating mgr.x 2026-03-09T14:38:19.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:19 vm07 bash[22585]: cephadm 2026-03-09T14:38:18.417209+0000 mgr.y (mgr.24991) 33 : cephadm [INF] Deploying daemon mgr.x on vm11 2026-03-09T14:38:19.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:19 vm07 bash[22585]: cluster 2026-03-09T14:38:19.052820+0000 mgr.y (mgr.24991) 34 : cluster [DBG] pgmap v11: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T14:38:19.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:19 vm07 bash[22585]: audit 2026-03-09T14:38:19.326771+0000 mon.a (mon.0) 919 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:19.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:19 vm07 bash[22585]: audit 2026-03-09T14:38:19.334428+0000 mon.a (mon.0) 920 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:19.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:19 vm07 bash[17480]: audit 2026-03-09T14:38:17.922475+0000 mgr.y (mgr.24991) 27 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T14:38:19.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:19 vm07 bash[17480]: cephadm 2026-03-09T14:38:17.932783+0000 mgr.y (mgr.24991) 28 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.107:5000 to Dashboard 2026-03-09T14:38:19.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:19 vm07 bash[17480]: audit 2026-03-09T14:38:17.933178+0000 mgr.y (mgr.24991) 29 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T14:38:19.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:19 vm07 bash[17480]: audit 2026-03-09T14:38:17.934322+0000 mgr.y (mgr.24991) 30 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm07"}]: dispatch 2026-03-09T14:38:19.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:19 vm07 bash[17480]: audit 2026-03-09T14:38:17.941682+0000 mgr.y (mgr.24991) 31 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T14:38:19.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:19 vm07 bash[17480]: cephadm 2026-03-09T14:38:18.414388+0000 mgr.y (mgr.24991) 32 : cephadm [INF] Upgrade: Updating mgr.x 2026-03-09T14:38:19.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:19 vm07 bash[17480]: cephadm 2026-03-09T14:38:18.417209+0000 mgr.y (mgr.24991) 33 : cephadm [INF] Deploying daemon mgr.x on vm11 2026-03-09T14:38:19.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:19 vm07 bash[17480]: cluster 2026-03-09T14:38:19.052820+0000 mgr.y (mgr.24991) 34 : cluster [DBG] pgmap v11: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T14:38:19.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:19 vm07 bash[17480]: audit 2026-03-09T14:38:19.326771+0000 mon.a (mon.0) 919 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:19.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:19 vm07 bash[17480]: audit 2026-03-09T14:38:19.334428+0000 mon.a (mon.0) 920 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:20.254 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:19 vm11 bash[41682]: debug 2026-03-09T14:38:19.950+0000 7f2f93e4f140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T14:38:20.748 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:20 vm11 bash[41682]: debug 2026-03-09T14:38:20.406+0000 7f2f93e4f140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T14:38:20.748 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:20 vm11 bash[41682]: debug 2026-03-09T14:38:20.486+0000 7f2f93e4f140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T14:38:20.748 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:20 vm11 bash[41682]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T14:38:20.748 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:20 vm11 bash[41682]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-09T14:38:20.748 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:20 vm11 bash[41682]: from numpy import show_config as show_numpy_config 2026-03-09T14:38:20.748 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:20 vm11 bash[41682]: debug 2026-03-09T14:38:20.606+0000 7f2f93e4f140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T14:38:21.003 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:20 vm11 bash[41682]: debug 2026-03-09T14:38:20.746+0000 7f2f93e4f140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T14:38:21.004 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:20 vm11 bash[41682]: debug 2026-03-09T14:38:20.786+0000 7f2f93e4f140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T14:38:21.004 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:20 vm11 bash[41682]: debug 2026-03-09T14:38:20.822+0000 7f2f93e4f140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T14:38:21.004 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:20 vm11 bash[41682]: debug 2026-03-09T14:38:20.866+0000 7f2f93e4f140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T14:38:21.004 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:20 vm11 bash[41682]: debug 2026-03-09T14:38:20.918+0000 7f2f93e4f140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T14:38:21.596 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:21 vm11 bash[41682]: debug 2026-03-09T14:38:21.342+0000 7f2f93e4f140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T14:38:21.596 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:21 vm11 bash[41682]: debug 2026-03-09T14:38:21.378+0000 7f2f93e4f140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T14:38:21.596 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:21 vm11 bash[41682]: debug 2026-03-09T14:38:21.414+0000 7f2f93e4f140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T14:38:21.596 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:21 vm11 bash[41682]: debug 2026-03-09T14:38:21.554+0000 7f2f93e4f140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T14:38:21.900 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:21 vm11 bash[41682]: debug 2026-03-09T14:38:21.594+0000 7f2f93e4f140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T14:38:21.900 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:21 vm11 bash[41682]: debug 2026-03-09T14:38:21.634+0000 7f2f93e4f140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T14:38:21.900 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:21 vm11 bash[41682]: debug 2026-03-09T14:38:21.750+0000 7f2f93e4f140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T14:38:22.253 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:21 vm11 bash[41682]: debug 2026-03-09T14:38:21.898+0000 7f2f93e4f140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T14:38:22.254 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:22 vm11 bash[41682]: debug 2026-03-09T14:38:22.066+0000 7f2f93e4f140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T14:38:22.254 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:22 vm11 bash[41682]: debug 2026-03-09T14:38:22.102+0000 7f2f93e4f140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T14:38:22.254 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:22 vm11 bash[41682]: debug 
2026-03-09T14:38:22.142+0000 7f2f93e4f140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T14:38:22.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:22 vm11 bash[17885]: cluster 2026-03-09T14:38:21.053316+0000 mgr.y (mgr.24991) 35 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T14:38:22.406 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:22 vm07 bash[22585]: cluster 2026-03-09T14:38:21.053316+0000 mgr.y (mgr.24991) 35 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T14:38:22.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:22 vm07 bash[17480]: cluster 2026-03-09T14:38:21.053316+0000 mgr.y (mgr.24991) 35 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T14:38:22.634 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:22 vm11 bash[41682]: debug 2026-03-09T14:38:22.298+0000 7f2f93e4f140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T14:38:22.634 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:22 vm11 bash[41682]: debug 2026-03-09T14:38:22.518+0000 7f2f93e4f140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T14:38:22.634 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:22 vm11 bash[41682]: [09/Mar/2026:14:38:22] ENGINE Bus STARTING 2026-03-09T14:38:22.634 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:22 vm11 bash[41682]: CherryPy Checker: 2026-03-09T14:38:22.634 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:22 vm11 bash[41682]: The Application mounted at '' has an empty config. 2026-03-09T14:38:23.003 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:22 vm11 bash[41682]: [09/Mar/2026:14:38:22] ENGINE Serving on http://:::9283 2026-03-09T14:38:23.004 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:22 vm11 bash[41682]: [09/Mar/2026:14:38:22] ENGINE Bus STARTED 2026-03-09T14:38:23.233 INFO:teuthology.orchestra.run.vm07.stdout:true 2026-03-09T14:38:23.406 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:23 vm07 bash[22585]: cluster 2026-03-09T14:38:22.529305+0000 mon.a (mon.0) 921 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T14:38:23.406 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:23 vm07 bash[22585]: cluster 2026-03-09T14:38:22.529522+0000 mon.a (mon.0) 922 : cluster [DBG] Standby manager daemon x started 2026-03-09T14:38:23.406 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:23 vm07 bash[22585]: audit 2026-03-09T14:38:22.529931+0000 mon.a (mon.0) 923 : audit [DBG] from='mgr.? 192.168.123.111:0/4141057201' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T14:38:23.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:23 vm07 bash[22585]: audit 2026-03-09T14:38:22.530790+0000 mon.a (mon.0) 924 : audit [DBG] from='mgr.? 192.168.123.111:0/4141057201' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T14:38:23.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:23 vm07 bash[22585]: audit 2026-03-09T14:38:22.531876+0000 mon.a (mon.0) 925 : audit [DBG] from='mgr.? 
192.168.123.111:0/4141057201' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T14:38:23.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:23 vm07 bash[22585]: audit 2026-03-09T14:38:22.532217+0000 mon.a (mon.0) 926 : audit [DBG] from='mgr.? 192.168.123.111:0/4141057201' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T14:38:23.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:23 vm07 bash[17480]: cluster 2026-03-09T14:38:22.529305+0000 mon.a (mon.0) 921 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T14:38:23.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:23 vm07 bash[17480]: cluster 2026-03-09T14:38:22.529522+0000 mon.a (mon.0) 922 : cluster [DBG] Standby manager daemon x started 2026-03-09T14:38:23.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:23 vm07 bash[17480]: audit 2026-03-09T14:38:22.529931+0000 mon.a (mon.0) 923 : audit [DBG] from='mgr.? 192.168.123.111:0/4141057201' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T14:38:23.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:23 vm07 bash[17480]: audit 2026-03-09T14:38:22.530790+0000 mon.a (mon.0) 924 : audit [DBG] from='mgr.? 192.168.123.111:0/4141057201' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T14:38:23.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:23 vm07 bash[17480]: audit 2026-03-09T14:38:22.531876+0000 mon.a (mon.0) 925 : audit [DBG] from='mgr.? 192.168.123.111:0/4141057201' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T14:38:23.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:23 vm07 bash[17480]: audit 2026-03-09T14:38:22.532217+0000 mon.a (mon.0) 926 : audit [DBG] from='mgr.? 192.168.123.111:0/4141057201' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T14:38:23.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:23 vm11 bash[17885]: cluster 2026-03-09T14:38:22.529305+0000 mon.a (mon.0) 921 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T14:38:23.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:23 vm11 bash[17885]: cluster 2026-03-09T14:38:22.529522+0000 mon.a (mon.0) 922 : cluster [DBG] Standby manager daemon x started 2026-03-09T14:38:23.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:23 vm11 bash[17885]: audit 2026-03-09T14:38:22.529931+0000 mon.a (mon.0) 923 : audit [DBG] from='mgr.? 192.168.123.111:0/4141057201' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T14:38:23.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:23 vm11 bash[17885]: audit 2026-03-09T14:38:22.530790+0000 mon.a (mon.0) 924 : audit [DBG] from='mgr.? 192.168.123.111:0/4141057201' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T14:38:23.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:23 vm11 bash[17885]: audit 2026-03-09T14:38:22.531876+0000 mon.a (mon.0) 925 : audit [DBG] from='mgr.? 192.168.123.111:0/4141057201' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T14:38:23.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:23 vm11 bash[17885]: audit 2026-03-09T14:38:22.532217+0000 mon.a (mon.0) 926 : audit [DBG] from='mgr.? 
192.168.123.111:0/4141057201' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-09T14:38:23.666 INFO:teuthology.orchestra.run.vm07.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-09T14:38:23.666 INFO:teuthology.orchestra.run.vm07.stdout:alertmanager.a vm07 *:9093,9094 running (45s) 14s ago 5m 14.2M - 0.25.0 c8568f914cd2 7b5214f8e385
2026-03-09T14:38:23.666 INFO:teuthology.orchestra.run.vm07.stdout:grafana.a vm11 *:3000 running (43s) 14s ago 5m 38.1M - dad864ee21e9 614f6a00be7a
2026-03-09T14:38:23.666 INFO:teuthology.orchestra.run.vm07.stdout:iscsi.foo.vm07.ohlmos vm07 starting - - - -
2026-03-09T14:38:23.666 INFO:teuthology.orchestra.run.vm07.stdout:mgr.x vm11 *:8443,9283,8765 starting - - - -
2026-03-09T14:38:23.666 INFO:teuthology.orchestra.run.vm07.stdout:mgr.y vm07 *:8443,9283,8765 running (34s) 14s ago 9m 508M - 19.2.3-678-ge911bdeb 654f31e6858e bdbac6dff330
2026-03-09T14:38:23.666 INFO:teuthology.orchestra.run.vm07.stdout:mon.a vm07 running (9m) 14s ago 9m 55.4M 2048M 17.2.0 e1d6a67b021e 47602ca6fae7
2026-03-09T14:38:23.666 INFO:teuthology.orchestra.run.vm07.stdout:mon.b vm11 running (8m) 14s ago 8m 44.7M 2048M 17.2.0 e1d6a67b021e eac3b7829b01
2026-03-09T14:38:23.666 INFO:teuthology.orchestra.run.vm07.stdout:mon.c vm07 running (8m) 14s ago 8m 41.1M 2048M 17.2.0 e1d6a67b021e 9c901130627b
2026-03-09T14:38:23.666 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.a vm07 *:9100 running (41s) 14s ago 5m 5344k - 1.7.0 72c9c2088986 16d64a9c3aa7
2026-03-09T14:38:23.666 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.b vm11 *:9100 running (39s) 14s ago 5m 5791k - 1.7.0 72c9c2088986 8e368c535897
2026-03-09T14:38:23.666 INFO:teuthology.orchestra.run.vm07.stdout:osd.0 vm07 running (8m) 14s ago 8m 50.3M 4096M 17.2.0 e1d6a67b021e 7a4a11fbf70d
2026-03-09T14:38:23.666 INFO:teuthology.orchestra.run.vm07.stdout:osd.1 vm07 running (7m) 14s ago 7m 52.4M 4096M 17.2.0 e1d6a67b021e 15e2e23b506b
2026-03-09T14:38:23.666 INFO:teuthology.orchestra.run.vm07.stdout:osd.2 vm07 running (7m) 14s ago 7m 48.2M 4096M 17.2.0 e1d6a67b021e fe41cd2240dc
2026-03-09T14:38:23.666 INFO:teuthology.orchestra.run.vm07.stdout:osd.3 vm07 running (7m) 14s ago 7m 49.9M 4096M 17.2.0 e1d6a67b021e b07b01a0b5aa
2026-03-09T14:38:23.666 INFO:teuthology.orchestra.run.vm07.stdout:osd.4 vm11 running (7m) 14s ago 7m 51.0M 4096M 17.2.0 e1d6a67b021e 172516d931e5
2026-03-09T14:38:23.666 INFO:teuthology.orchestra.run.vm07.stdout:osd.5 vm11 running (6m) 14s ago 6m 48.4M 4096M 17.2.0 e1d6a67b021e d7defb26b5d1
2026-03-09T14:38:23.666 INFO:teuthology.orchestra.run.vm07.stdout:osd.6 vm11 running (6m) 14s ago 6m 48.5M 4096M 17.2.0 e1d6a67b021e 52e28e90b585
2026-03-09T14:38:23.666 INFO:teuthology.orchestra.run.vm07.stdout:osd.7 vm11 running (6m) 14s ago 6m 50.0M 4096M 17.2.0 e1d6a67b021e abb74346bf4d
2026-03-09T14:38:23.666 INFO:teuthology.orchestra.run.vm07.stdout:prometheus.a vm11 *:9095 starting - - - -
2026-03-09T14:38:23.666 INFO:teuthology.orchestra.run.vm07.stdout:rgw.foo.vm07.urmgxb vm07 *:8000 running (5m) 14s ago 5m 84.8M - 17.2.0 e1d6a67b021e 765128ae03a3
2026-03-09T14:38:23.666 INFO:teuthology.orchestra.run.vm07.stdout:rgw.foo.vm11.ncyump vm11 *:8000 running (5m) 14s ago 5m 83.8M - 17.2.0 e1d6a67b021e 33917711cfd6
2026-03-09T14:38:23.666 INFO:teuthology.orchestra.run.vm07.stdout:rgw.smpl.vm07.tkkeli vm07 *:80 running (5m) 14s ago 5m 84.2M - 17.2.0 e1d6a67b021e 377fed84fff0
2026-03-09T14:38:23.666 INFO:teuthology.orchestra.run.vm07.stdout:rgw.smpl.vm11.ocxkef vm11 *:80 running (5m) 14s ago 5m 84.0M - 17.2.0 e1d6a67b021e 90ec06d07cd4
2026-03-09T14:38:23.906 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:23 vm07 bash[52213]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:38:23] "GET /metrics HTTP/1.1" 200 37782 "" "Prometheus/2.51.0"
2026-03-09T14:38:23.911 INFO:teuthology.orchestra.run.vm07.stdout:{
2026-03-09T14:38:23.911 INFO:teuthology.orchestra.run.vm07.stdout: "mon": {
2026-03-09T14:38:23.911 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 3
2026-03-09T14:38:23.911 INFO:teuthology.orchestra.run.vm07.stdout: },
2026-03-09T14:38:23.911 INFO:teuthology.orchestra.run.vm07.stdout: "mgr": {
2026-03-09T14:38:23.911 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-09T14:38:23.911 INFO:teuthology.orchestra.run.vm07.stdout: },
2026-03-09T14:38:23.911 INFO:teuthology.orchestra.run.vm07.stdout: "osd": {
2026-03-09T14:38:23.911 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8
2026-03-09T14:38:23.911 INFO:teuthology.orchestra.run.vm07.stdout: },
2026-03-09T14:38:23.911 INFO:teuthology.orchestra.run.vm07.stdout: "mds": {},
2026-03-09T14:38:23.911 INFO:teuthology.orchestra.run.vm07.stdout: "rgw": {
2026-03-09T14:38:23.911 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 4
2026-03-09T14:38:23.911 INFO:teuthology.orchestra.run.vm07.stdout: },
2026-03-09T14:38:23.911 INFO:teuthology.orchestra.run.vm07.stdout: "overall": {
2026-03-09T14:38:23.911 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 15,
2026-03-09T14:38:23.911 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-09T14:38:23.911 INFO:teuthology.orchestra.run.vm07.stdout: }
2026-03-09T14:38:23.911 INFO:teuthology.orchestra.run.vm07.stdout:}
2026-03-09T14:38:24.111 INFO:teuthology.orchestra.run.vm07.stdout:{
2026-03-09T14:38:24.111 INFO:teuthology.orchestra.run.vm07.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df",
2026-03-09T14:38:24.111 INFO:teuthology.orchestra.run.vm07.stdout: "in_progress": true,
2026-03-09T14:38:24.111 INFO:teuthology.orchestra.run.vm07.stdout: "which": "Upgrading all daemon types on all hosts",
2026-03-09T14:38:24.111 INFO:teuthology.orchestra.run.vm07.stdout: "services_complete": [],
2026-03-09T14:38:24.111 INFO:teuthology.orchestra.run.vm07.stdout: "progress": "",
2026-03-09T14:38:24.111 INFO:teuthology.orchestra.run.vm07.stdout: "message": "Currently upgrading mgr daemons",
2026-03-09T14:38:24.111 INFO:teuthology.orchestra.run.vm07.stdout: "is_paused": false
2026-03-09T14:38:24.112 INFO:teuthology.orchestra.run.vm07.stdout:}
2026-03-09T14:38:24.364 INFO:teuthology.orchestra.run.vm07.stdout:HEALTH_OK
2026-03-09T14:38:24.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:24 vm07 bash[17480]: cluster 2026-03-09T14:38:23.030744+0000 mon.a (mon.0) 927 : cluster [DBG] mgrmap e32: y(active, since 20s), standbys: x
2026-03-09T14:38:24.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:24 vm07 bash[17480]: cluster 2026-03-09T14:38:23.053666+0000 mgr.y (mgr.24991) 36 : cluster
[DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 1 op/s 2026-03-09T14:38:24.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:24 vm07 bash[17480]: audit 2026-03-09T14:38:23.225400+0000 mgr.y (mgr.24991) 37 : audit [DBG] from='client.15171 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:38:24.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:24 vm07 bash[17480]: audit 2026-03-09T14:38:23.917760+0000 mon.a (mon.0) 928 : audit [DBG] from='client.? 192.168.123.107:0/1880445683' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:38:24.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:24 vm07 bash[22585]: cluster 2026-03-09T14:38:23.030744+0000 mon.a (mon.0) 927 : cluster [DBG] mgrmap e32: y(active, since 20s), standbys: x 2026-03-09T14:38:24.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:24 vm07 bash[22585]: cluster 2026-03-09T14:38:23.053666+0000 mgr.y (mgr.24991) 36 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 1 op/s 2026-03-09T14:38:24.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:24 vm07 bash[22585]: audit 2026-03-09T14:38:23.225400+0000 mgr.y (mgr.24991) 37 : audit [DBG] from='client.15171 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:38:24.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:24 vm07 bash[22585]: audit 2026-03-09T14:38:23.917760+0000 mon.a (mon.0) 928 : audit [DBG] from='client.? 192.168.123.107:0/1880445683' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:38:24.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:24 vm11 bash[17885]: cluster 2026-03-09T14:38:23.030744+0000 mon.a (mon.0) 927 : cluster [DBG] mgrmap e32: y(active, since 20s), standbys: x 2026-03-09T14:38:24.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:24 vm11 bash[17885]: cluster 2026-03-09T14:38:23.053666+0000 mgr.y (mgr.24991) 36 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 1 op/s 2026-03-09T14:38:24.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:24 vm11 bash[17885]: audit 2026-03-09T14:38:23.225400+0000 mgr.y (mgr.24991) 37 : audit [DBG] from='client.15171 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:38:24.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:24 vm11 bash[17885]: audit 2026-03-09T14:38:23.917760+0000 mon.a (mon.0) 928 : audit [DBG] from='client.? 192.168.123.107:0/1880445683' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:38:24.504 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:24 vm11 bash[41290]: ts=2026-03-09T14:38:24.146Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. 
This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"192.168.123.111:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:38:25.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:25 vm07 bash[17480]: audit 2026-03-09T14:38:23.451073+0000 mgr.y (mgr.24991) 38 : audit [DBG] from='client.15177 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:38:25.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:25 vm07 bash[17480]: audit 2026-03-09T14:38:23.667366+0000 mgr.y (mgr.24991) 39 : audit [DBG] from='client.24908 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:38:25.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:25 vm07 bash[17480]: audit 2026-03-09T14:38:24.118130+0000 mgr.y (mgr.24991) 40 : audit [DBG] from='client.15189 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:38:25.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:25 vm07 bash[17480]: audit 2026-03-09T14:38:24.369825+0000 mon.c (mon.1) 139 : audit [DBG] from='client.? 
192.168.123.107:0/1895785990' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:38:25.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:25 vm07 bash[17480]: audit 2026-03-09T14:38:24.715253+0000 mon.a (mon.0) 929 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:25.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:25 vm07 bash[17480]: audit 2026-03-09T14:38:24.721434+0000 mon.a (mon.0) 930 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:25.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:25 vm07 bash[17480]: audit 2026-03-09T14:38:24.858697+0000 mon.a (mon.0) 931 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:25.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:25 vm07 bash[17480]: audit 2026-03-09T14:38:24.865873+0000 mon.a (mon.0) 932 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:25.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:25 vm07 bash[22585]: audit 2026-03-09T14:38:23.451073+0000 mgr.y (mgr.24991) 38 : audit [DBG] from='client.15177 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:38:25.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:25 vm07 bash[22585]: audit 2026-03-09T14:38:23.667366+0000 mgr.y (mgr.24991) 39 : audit [DBG] from='client.24908 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:38:25.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:25 vm07 bash[22585]: audit 2026-03-09T14:38:24.118130+0000 mgr.y (mgr.24991) 40 : audit [DBG] from='client.15189 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:38:25.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:25 vm07 bash[22585]: audit 2026-03-09T14:38:24.369825+0000 mon.c (mon.1) 139 : audit [DBG] from='client.? 
192.168.123.107:0/1895785990' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:38:25.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:25 vm07 bash[22585]: audit 2026-03-09T14:38:24.715253+0000 mon.a (mon.0) 929 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:25.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:25 vm07 bash[22585]: audit 2026-03-09T14:38:24.721434+0000 mon.a (mon.0) 930 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:25.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:25 vm07 bash[22585]: audit 2026-03-09T14:38:24.858697+0000 mon.a (mon.0) 931 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:25.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:25 vm07 bash[22585]: audit 2026-03-09T14:38:24.865873+0000 mon.a (mon.0) 932 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:25.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:25 vm11 bash[17885]: audit 2026-03-09T14:38:23.451073+0000 mgr.y (mgr.24991) 38 : audit [DBG] from='client.15177 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:38:25.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:25 vm11 bash[17885]: audit 2026-03-09T14:38:23.667366+0000 mgr.y (mgr.24991) 39 : audit [DBG] from='client.24908 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:38:25.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:25 vm11 bash[17885]: audit 2026-03-09T14:38:24.118130+0000 mgr.y (mgr.24991) 40 : audit [DBG] from='client.15189 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:38:25.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:25 vm11 bash[17885]: audit 2026-03-09T14:38:24.369825+0000 mon.c (mon.1) 139 : audit [DBG] from='client.? 
192.168.123.107:0/1895785990' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:38:25.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:25 vm11 bash[17885]: audit 2026-03-09T14:38:24.715253+0000 mon.a (mon.0) 929 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:25.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:25 vm11 bash[17885]: audit 2026-03-09T14:38:24.721434+0000 mon.a (mon.0) 930 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:25.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:25 vm11 bash[17885]: audit 2026-03-09T14:38:24.858697+0000 mon.a (mon.0) 931 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:25.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:25 vm11 bash[17885]: audit 2026-03-09T14:38:24.865873+0000 mon.a (mon.0) 932 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:26.656 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:26 vm07 bash[22585]: cluster 2026-03-09T14:38:25.054175+0000 mgr.y (mgr.24991) 41 : cluster [DBG] pgmap v14: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T14:38:26.656 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:26 vm07 bash[22585]: audit 2026-03-09T14:38:25.299492+0000 mon.a (mon.0) 933 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:26.656 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:26 vm07 bash[22585]: audit 2026-03-09T14:38:25.306194+0000 mon.a (mon.0) 934 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:26.657 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:26 vm07 bash[17480]: cluster 2026-03-09T14:38:25.054175+0000 mgr.y (mgr.24991) 41 : cluster [DBG] pgmap v14: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T14:38:26.657 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:26 vm07 bash[17480]: audit 2026-03-09T14:38:25.299492+0000 mon.a (mon.0) 933 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:26.657 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:26 vm07 bash[17480]: audit 2026-03-09T14:38:25.306194+0000 mon.a (mon.0) 934 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:26.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:26 vm11 bash[17885]: cluster 2026-03-09T14:38:25.054175+0000 mgr.y (mgr.24991) 41 : cluster [DBG] pgmap v14: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T14:38:26.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:26 vm11 bash[17885]: audit 2026-03-09T14:38:25.299492+0000 mon.a (mon.0) 933 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:26.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:26 vm11 bash[17885]: audit 2026-03-09T14:38:25.306194+0000 mon.a (mon.0) 934 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:27.253 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:26 vm11 bash[41290]: ts=2026-03-09T14:38:26.947Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: 
ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:38:28.656 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:28 vm07 bash[22585]: cluster 2026-03-09T14:38:27.054510+0000 mgr.y (mgr.24991) 42 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 1 op/s 2026-03-09T14:38:28.656 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:28 vm07 bash[22585]: audit 2026-03-09T14:38:27.428159+0000 mgr.y (mgr.24991) 43 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:38:28.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:28 vm07 bash[17480]: cluster 2026-03-09T14:38:27.054510+0000 mgr.y (mgr.24991) 42 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 1 op/s 2026-03-09T14:38:28.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:28 vm07 bash[17480]: audit 2026-03-09T14:38:27.428159+0000 mgr.y (mgr.24991) 43 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:38:28.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:28 vm11 bash[17885]: cluster 2026-03-09T14:38:27.054510+0000 mgr.y (mgr.24991) 42 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 1 op/s 2026-03-09T14:38:28.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:28 vm11 bash[17885]: audit 2026-03-09T14:38:27.428159+0000 mgr.y (mgr.24991) 43 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:38:30.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:30 vm11 bash[17885]: cluster 2026-03-09T14:38:29.054845+0000 mgr.y (mgr.24991) 44 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 1 op/s 2026-03-09T14:38:30.656 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:30 vm07 bash[22585]: cluster 2026-03-09T14:38:29.054845+0000 mgr.y (mgr.24991) 44 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 1 op/s 2026-03-09T14:38:30.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:30 vm07 bash[17480]: cluster 2026-03-09T14:38:29.054845+0000 mgr.y (mgr.24991) 44 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 1 op/s 2026-03-09T14:38:32.254 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:32 vm11 bash[17885]: cluster 2026-03-09T14:38:31.055337+0000 mgr.y (mgr.24991) 45 : cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T14:38:32.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:32 vm11 bash[17885]: audit 2026-03-09T14:38:31.897260+0000 mon.a (mon.0) 935 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:32.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:32 vm11 bash[17885]: audit 2026-03-09T14:38:31.905532+0000 mon.a (mon.0) 936 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:32.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:32 vm11 bash[17885]: audit 2026-03-09T14:38:31.909239+0000 mon.c (mon.1) 140 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:38:32.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:32 vm11 bash[17885]: audit 2026-03-09T14:38:31.910012+0000 mon.c (mon.1) 141 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:38:32.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:32 vm11 bash[17885]: audit 2026-03-09T14:38:31.915018+0000 mon.a (mon.0) 937 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:32.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:32 vm11 bash[17885]: audit 2026-03-09T14:38:31.954825+0000 mon.c (mon.1) 142 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:38:32.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:32 vm11 bash[17885]: audit 2026-03-09T14:38:31.956007+0000 mon.c (mon.1) 143 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:38:32.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:32 vm11 bash[17885]: audit 2026-03-09T14:38:31.962687+0000 mon.a (mon.0) 938 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:32.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:32 vm11 bash[17885]: audit 2026-03-09T14:38:31.963633+0000 mon.c (mon.1) 144 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.x"}]: dispatch 2026-03-09T14:38:32.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:32 vm11 bash[17885]: audit 2026-03-09T14:38:31.963811+0000 mon.a (mon.0) 939 : audit [INF] from='mgr.24991 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.x"}]: dispatch 2026-03-09T14:38:32.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:32 vm11 bash[17885]: audit 2026-03-09T14:38:31.969219+0000 mon.a (mon.0) 940 : audit [INF] from='mgr.24991 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr.x"}]': finished 2026-03-09T14:38:32.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:32 vm11 bash[17885]: audit 2026-03-09T14:38:31.970247+0000 mon.c (mon.1) 145 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.y"}]: dispatch 2026-03-09T14:38:32.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:32 vm11 bash[17885]: audit 2026-03-09T14:38:31.970425+0000 mon.a (mon.0) 941 : audit [INF] 
from='mgr.24991 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.y"}]: dispatch 2026-03-09T14:38:32.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:32 vm11 bash[17885]: audit 2026-03-09T14:38:31.976009+0000 mon.a (mon.0) 942 : audit [INF] from='mgr.24991 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr.y"}]': finished 2026-03-09T14:38:32.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:32 vm11 bash[17885]: audit 2026-03-09T14:38:31.977080+0000 mon.c (mon.1) 146 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-09T14:38:32.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:32 vm11 bash[17885]: audit 2026-03-09T14:38:31.977577+0000 mon.c (mon.1) 147 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["c"]}]: dispatch 2026-03-09T14:38:32.275 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:32 vm07 bash[17480]: cluster 2026-03-09T14:38:31.055337+0000 mgr.y (mgr.24991) 45 : cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T14:38:32.275 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:32 vm07 bash[17480]: audit 2026-03-09T14:38:31.897260+0000 mon.a (mon.0) 935 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:32.275 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:32 vm07 bash[17480]: audit 2026-03-09T14:38:31.905532+0000 mon.a (mon.0) 936 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:32.275 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:32 vm07 bash[17480]: audit 2026-03-09T14:38:31.909239+0000 mon.c (mon.1) 140 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:38:32.275 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:32 vm07 bash[17480]: audit 2026-03-09T14:38:31.910012+0000 mon.c (mon.1) 141 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:38:32.275 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:32 vm07 bash[17480]: audit 2026-03-09T14:38:31.915018+0000 mon.a (mon.0) 937 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:32.275 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:32 vm07 bash[17480]: audit 2026-03-09T14:38:31.954825+0000 mon.c (mon.1) 142 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:38:32.275 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:32 vm07 bash[17480]: audit 2026-03-09T14:38:31.956007+0000 mon.c (mon.1) 143 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:38:32.275 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:32 vm07 bash[17480]: audit 2026-03-09T14:38:31.962687+0000 mon.a (mon.0) 938 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:32.275 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:32 vm07 bash[17480]: audit 2026-03-09T14:38:31.963633+0000 mon.c (mon.1) 144 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.x"}]: dispatch 2026-03-09T14:38:32.275 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 
14:38:32 vm07 bash[17480]: audit 2026-03-09T14:38:31.963811+0000 mon.a (mon.0) 939 : audit [INF] from='mgr.24991 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.x"}]: dispatch 2026-03-09T14:38:32.275 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:32 vm07 bash[17480]: audit 2026-03-09T14:38:31.969219+0000 mon.a (mon.0) 940 : audit [INF] from='mgr.24991 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr.x"}]': finished 2026-03-09T14:38:32.275 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:32 vm07 bash[17480]: audit 2026-03-09T14:38:31.970247+0000 mon.c (mon.1) 145 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.y"}]: dispatch 2026-03-09T14:38:32.275 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:32 vm07 bash[17480]: audit 2026-03-09T14:38:31.970425+0000 mon.a (mon.0) 941 : audit [INF] from='mgr.24991 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.y"}]: dispatch 2026-03-09T14:38:32.275 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:32 vm07 bash[17480]: audit 2026-03-09T14:38:31.976009+0000 mon.a (mon.0) 942 : audit [INF] from='mgr.24991 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr.y"}]': finished 2026-03-09T14:38:32.275 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:32 vm07 bash[17480]: audit 2026-03-09T14:38:31.977080+0000 mon.c (mon.1) 146 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-09T14:38:32.275 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:32 vm07 bash[17480]: audit 2026-03-09T14:38:31.977577+0000 mon.c (mon.1) 147 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["c"]}]: dispatch 2026-03-09T14:38:32.275 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:32 vm07 bash[22585]: cluster 2026-03-09T14:38:31.055337+0000 mgr.y (mgr.24991) 45 : cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T14:38:32.275 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:32 vm07 bash[22585]: audit 2026-03-09T14:38:31.897260+0000 mon.a (mon.0) 935 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:32.275 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:32 vm07 bash[22585]: audit 2026-03-09T14:38:31.905532+0000 mon.a (mon.0) 936 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:32.275 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:32 vm07 bash[22585]: audit 2026-03-09T14:38:31.909239+0000 mon.c (mon.1) 140 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:38:32.275 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:32 vm07 bash[22585]: audit 2026-03-09T14:38:31.910012+0000 mon.c (mon.1) 141 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:38:32.275 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:32 vm07 bash[22585]: audit 2026-03-09T14:38:31.915018+0000 mon.a (mon.0) 937 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:32.275 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:32 vm07 bash[22585]: audit 2026-03-09T14:38:31.954825+0000 mon.c (mon.1) 142 : 
audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:38:32.275 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:32 vm07 bash[22585]: audit 2026-03-09T14:38:31.956007+0000 mon.c (mon.1) 143 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:38:32.275 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:32 vm07 bash[22585]: audit 2026-03-09T14:38:31.962687+0000 mon.a (mon.0) 938 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:32.275 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:32 vm07 bash[22585]: audit 2026-03-09T14:38:31.963633+0000 mon.c (mon.1) 144 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.x"}]: dispatch 2026-03-09T14:38:32.275 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:32 vm07 bash[22585]: audit 2026-03-09T14:38:31.963811+0000 mon.a (mon.0) 939 : audit [INF] from='mgr.24991 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.x"}]: dispatch 2026-03-09T14:38:32.275 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:32 vm07 bash[22585]: audit 2026-03-09T14:38:31.969219+0000 mon.a (mon.0) 940 : audit [INF] from='mgr.24991 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr.x"}]': finished 2026-03-09T14:38:32.275 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:32 vm07 bash[22585]: audit 2026-03-09T14:38:31.970247+0000 mon.c (mon.1) 145 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.y"}]: dispatch 2026-03-09T14:38:32.275 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:32 vm07 bash[22585]: audit 2026-03-09T14:38:31.970425+0000 mon.a (mon.0) 941 : audit [INF] from='mgr.24991 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.y"}]: dispatch 2026-03-09T14:38:32.275 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:32 vm07 bash[22585]: audit 2026-03-09T14:38:31.976009+0000 mon.a (mon.0) 942 : audit [INF] from='mgr.24991 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr.y"}]': finished 2026-03-09T14:38:32.275 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:32 vm07 bash[22585]: audit 2026-03-09T14:38:31.977080+0000 mon.c (mon.1) 146 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-09T14:38:32.275 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:32 vm07 bash[22585]: audit 2026-03-09T14:38:31.977577+0000 mon.c (mon.1) 147 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["c"]}]: dispatch 2026-03-09T14:38:33.013 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:32 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:38:33.014 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:33 vm07 bash[17480]: cephadm 2026-03-09T14:38:31.956489+0000 mgr.y (mgr.24991) 46 : cephadm [INF] Upgrade: Setting container_image for all mgr 2026-03-09T14:38:33.014 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:33 vm07 bash[17480]: cephadm 2026-03-09T14:38:31.977971+0000 mgr.y (mgr.24991) 47 : cephadm [INF] Upgrade: It appears safe to stop mon.c 2026-03-09T14:38:33.014 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:33 vm07 bash[17480]: audit 2026-03-09T14:38:32.448244+0000 mon.a (mon.0) 943 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:33.014 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:33 vm07 bash[17480]: audit 2026-03-09T14:38:32.452044+0000 mon.c (mon.1) 148 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:38:33.014 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:33 vm07 bash[17480]: audit 2026-03-09T14:38:32.452912+0000 mon.c (mon.1) 149 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:38:33.014 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:33 vm07 bash[17480]: audit 2026-03-09T14:38:32.453782+0000 mon.c (mon.1) 150 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:38:33.014 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:38:32 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:33.014 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:38:32 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:33.014 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:38:32 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:33.014 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:38:32 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:38:33.014 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:32 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:33.014 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:38:32 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:33.014 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:38:32 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:33.014 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:32 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:38:33.271 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:33 vm07 bash[52213]: [09/Mar/2026:14:38:33] ENGINE Bus STOPPING 2026-03-09T14:38:33.271 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[22585]: cephadm 2026-03-09T14:38:31.956489+0000 mgr.y (mgr.24991) 46 : cephadm [INF] Upgrade: Setting container_image for all mgr 2026-03-09T14:38:33.271 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[22585]: cephadm 2026-03-09T14:38:31.977971+0000 mgr.y (mgr.24991) 47 : cephadm [INF] Upgrade: It appears safe to stop mon.c 2026-03-09T14:38:33.271 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[22585]: audit 2026-03-09T14:38:32.448244+0000 mon.a (mon.0) 943 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:33.271 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[22585]: audit 2026-03-09T14:38:32.452044+0000 mon.c (mon.1) 148 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:38:33.271 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[22585]: audit 2026-03-09T14:38:32.452912+0000 mon.c (mon.1) 149 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:38:33.271 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[22585]: audit 2026-03-09T14:38:32.453782+0000 mon.c (mon.1) 150 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:38:33.271 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 systemd[1]: Stopping Ceph mon.c for f59f9828-1bc3-11f1-bfd8-7b3d0c866040... 
2026-03-09T14:38:33.271 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[22585]: debug 2026-03-09T14:38:33.064+0000 7f6fb115c700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.c -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-09T14:38:33.271 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[22585]: debug 2026-03-09T14:38:33.064+0000 7f6fb115c700 -1 mon.c@1(peon) e3 *** Got Signal Terminated *** 2026-03-09T14:38:33.271 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55117]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-mon-c 2026-03-09T14:38:33.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:33 vm11 bash[17885]: cephadm 2026-03-09T14:38:31.956489+0000 mgr.y (mgr.24991) 46 : cephadm [INF] Upgrade: Setting container_image for all mgr 2026-03-09T14:38:33.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:33 vm11 bash[17885]: cephadm 2026-03-09T14:38:31.977971+0000 mgr.y (mgr.24991) 47 : cephadm [INF] Upgrade: It appears safe to stop mon.c 2026-03-09T14:38:33.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:33 vm11 bash[17885]: audit 2026-03-09T14:38:32.448244+0000 mon.a (mon.0) 943 : audit [INF] from='mgr.24991 ' entity='mgr.y' 2026-03-09T14:38:33.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:33 vm11 bash[17885]: audit 2026-03-09T14:38:32.452044+0000 mon.c (mon.1) 148 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:38:33.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:33 vm11 bash[17885]: audit 2026-03-09T14:38:32.452912+0000 mon.c (mon.1) 149 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:38:33.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:33 vm11 bash[17885]: audit 2026-03-09T14:38:32.453782+0000 mon.c (mon.1) 150 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:38:33.568 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:33 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:33.568 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:38:33 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:33.568 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:38:33 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:33.568 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:38:33 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:33.568 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55173]: Error response from daemon: No such container: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-mon-c 2026-03-09T14:38:33.569 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 systemd[1]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@mon.c.service: Deactivated successfully. 2026-03-09T14:38:33.569 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 systemd[1]: Stopped Ceph mon.c for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:38:33.569 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:33.569 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 systemd[1]: Started Ceph mon.c for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:38:33.569 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:38:33 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:33.569 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:38:33 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:33.569 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:38:33 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:38:33.569 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:33 vm07 bash[52213]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:38:33] "GET /metrics HTTP/1.1" 200 37778 "" "Prometheus/2.51.0" 2026-03-09T14:38:33.569 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:33 vm07 bash[52213]: [09/Mar/2026:14:38:33] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T14:38:33.569 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:33 vm07 bash[52213]: [09/Mar/2026:14:38:33] ENGINE Bus STOPPED 2026-03-09T14:38:33.569 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:33 vm07 bash[52213]: [09/Mar/2026:14:38:33] ENGINE Bus STARTING 2026-03-09T14:38:33.569 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:33 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:33.569 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:33 vm07 bash[52213]: [09/Mar/2026:14:38:33] ENGINE Serving on http://:::9283 2026-03-09T14:38:33.569 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:33 vm07 bash[52213]: [09/Mar/2026:14:38:33] ENGINE Bus STARTED 2026-03-09T14:38:33.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-09T14:38:33.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 7 2026-03-09T14:38:33.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 0 pidfile_write: ignore empty --pid-file 2026-03-09T14:38:33.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 0 load: jerasure load: lrc 2026-03-09T14:38:33.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: RocksDB version: 7.9.2 2026-03-09T14:38:33.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Git sha 0 2026-03-09T14:38:33.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Compile date 2026-02-25 18:11:04 2026-03-09T14:38:33.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: DB SUMMARY 2026-03-09T14:38:33.908 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: DB Session ID: TDGQ339LPZ8H921EM0TP 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: CURRENT file: CURRENT 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: IDENTITY file: IDENTITY 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 
09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: MANIFEST file: MANIFEST-000009 size: 503 Bytes 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-c/store.db dir, Total Num: 1, files: 000018.sst 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-c/store.db: 000016.log size: 4920174 ; 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.error_if_exists: 0 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.create_if_missing: 0 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.paranoid_checks: 1 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.env: 0x55af502ffdc0 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.info_log: 0x55af539eb7e0 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.statistics: (nil) 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.use_fsync: 0 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.max_log_file_size: 0 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 
2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.allow_fallocate: 1 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.use_direct_reads: 0 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.db_log_dir: 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.wal_dir: 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.write_buffer_manager: 0x55af539ef900 2026-03-09T14:38:33.909 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.enable_pipelined_write: 0 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.unordered_write: 0 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.row_cache: None 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.wal_filter: None 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.two_write_queues: 0 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 
7f7740eedd80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-09T14:38:33.909 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.wal_compression: 0 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.atomic_flush: 0 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.log_readahead_size: 0 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.max_background_jobs: 2 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.max_background_compactions: -1 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.max_subcompactions: 1 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-09T14:38:33.910 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.max_open_files: -1 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.bytes_per_sync: 0 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.max_background_flushes: -1 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Compression algorithms supported: 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: kZSTD supported: 0 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: kXpressCompression supported: 0 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: kBZip2Compression supported: 0 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: kLZ4Compression supported: 1 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: 
kZlibCompression supported: 1 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: kSnappyCompression supported: 1 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-c/store.db/MANIFEST-000009 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.merge_operator: 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.compaction_filter: None 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55af539ea3c0) 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: cache_index_and_filter_blocks: 1 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: pin_top_level_index_and_filter: 1 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: 
index_type: 0 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: data_block_index_type: 0 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: index_shortening: 1 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: data_block_hash_table_util_ratio: 0.750000 2026-03-09T14:38:33.910 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: checksum: 4 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: no_block_cache: 0 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: block_cache: 0x55af53a11350 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: block_cache_name: BinnedLRUCache 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: block_cache_options: 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: capacity : 536870912 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: num_shard_bits : 4 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: strict_capacity_limit : 0 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: high_pri_pool_ratio: 0.000 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: block_cache_compressed: (nil) 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: persistent_cache: (nil) 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: block_size: 4096 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: block_size_deviation: 10 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: block_restart_interval: 16 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: index_block_restart_interval: 1 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: metadata_block_size: 4096 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: partition_filters: 0 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: use_delta_encoding: 1 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: filter_policy: bloomfilter 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: whole_key_filtering: 1 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: verify_compression: 0 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: read_amp_bytes_per_bit: 0 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: format_version: 5 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: enable_index_compression: 1 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: block_align: 0 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: max_auto_readahead_size: 262144 
2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: prepopulate_block_cache: 0 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: initial_auto_readahead_size: 8192 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: num_file_reads_for_auto_readahead: 2 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.compression: NoCompression 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.num_levels: 7 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: 
Options.bottommost_compression_opts.parallel_threads: 1 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-09T14:38:33.911 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 
09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.684+0000 7f7740eedd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 
rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.inplace_update_support: 0 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.bloom_locality: 0 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 
bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.max_successive_merges: 0 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.ttl: 2592000 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.enable_blob_files: false 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.min_blob_size: 0 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.688+0000 7f7740eedd80 
4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.692+0000 7f773b4c0640 3 rocksdb: [table/block_based/block_based_table_reader.cc:721] At least one SST file opened without unique ID to verify: 18.sst 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.692+0000 7f7740eedd80 4 rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.692+0000 7f7740eedd80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-c/store.db/MANIFEST-000009 succeeded,manifest_file_number is 9, next_file_number is 20, last_sequence is 10294, log_number is 16,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.692+0000 7f7740eedd80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 16 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.692+0000 7f7740eedd80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b1d9987d-4a03-43d4-99d9-b72731908357 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.692+0000 7f7740eedd80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773067113695436, "job": 1, "event": "recovery_started", "wal_files": [16]} 2026-03-09T14:38:33.912 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.692+0000 7f7740eedd80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #16 mode 2 2026-03-09T14:38:33.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.708+0000 7f7740eedd80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773067113709911, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 21, "file_size": 2978963, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10299, "largest_seqno": 11659, "table_properties": {"data_size": 2972378, "index_size": 4073, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 13719, "raw_average_key_size": 24, "raw_value_size": 2959679, "raw_average_value_size": 5210, "num_data_blocks": 188, "num_entries": 568, "num_filter_entries": 568, "num_deletions": 2, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773067113, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1d9987d-4a03-43d4-99d9-b72731908357", 
"db_session_id": "TDGQ339LPZ8H921EM0TP", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}} 2026-03-09T14:38:33.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.708+0000 7f7740eedd80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773067113710054, "job": 1, "event": "recovery_finished"} 2026-03-09T14:38:33.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.708+0000 7f7740eedd80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 23 2026-03-09T14:38:33.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.708+0000 7f7740eedd80 4 rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 2026-03-09T14:38:33.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.708+0000 7f7740eedd80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-c/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-09T14:38:33.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.708+0000 7f7740eedd80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55af53a12e00 2026-03-09T14:38:33.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.708+0000 7f7740eedd80 4 rocksdb: DB pointer 0x55af53b1e000 2026-03-09T14:38:33.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.712+0000 7f7740eedd80 0 starting mon.c rank 1 at public addrs [v2:192.168.123.107:3301/0,v1:192.168.123.107:6790/0] at bind addrs [v2:192.168.123.107:3301/0,v1:192.168.123.107:6790/0] mon_data /var/lib/ceph/mon/ceph-c fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 2026-03-09T14:38:33.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.712+0000 7f7740eedd80 1 mon.c@-1(???) 
e3 preinit fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 2026-03-09T14:38:33.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.712+0000 7f7740eedd80 0 mon.c@-1(???).mds e1 new map 2026-03-09T14:38:33.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.712+0000 7f7740eedd80 0 mon.c@-1(???).mds e1 print_map 2026-03-09T14:38:33.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: e1 2026-03-09T14:38:33.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: btime 1970-01-01T00:00:00:000000+0000 2026-03-09T14:38:33.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T14:38:33.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T14:38:33.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: legacy client fscid: -1 2026-03-09T14:38:33.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: 2026-03-09T14:38:33.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: No filesystems configured 2026-03-09T14:38:33.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.712+0000 7f7740eedd80 0 mon.c@-1(???).osd e91 crush map has features 3314933000854323200, adjusting msgr requires 2026-03-09T14:38:33.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.712+0000 7f7740eedd80 0 mon.c@-1(???).osd e91 crush map has features 432629239337189376, adjusting msgr requires 2026-03-09T14:38:33.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.712+0000 7f7740eedd80 0 mon.c@-1(???).osd e91 crush map has features 432629239337189376, adjusting msgr requires 2026-03-09T14:38:33.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.712+0000 7f7740eedd80 0 mon.c@-1(???).osd e91 crush map has features 432629239337189376, adjusting msgr requires 2026-03-09T14:38:33.913 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:33 vm07 bash[55244]: debug 2026-03-09T14:38:33.712+0000 7f7740eedd80 1 mon.c@-1(???).paxosservice(auth 1..21) refresh upgraded, format 0 -> 3 2026-03-09T14:38:34.503 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:34 vm11 bash[41290]: ts=2026-03-09T14:38:34.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"192.168.123.111:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:38:35.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:34 vm07 bash[17480]: cluster 2026-03-09T14:38:33.803349+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election 2026-03-09T14:38:35.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:34 vm07 bash[17480]: cluster 2026-03-09T14:38:33.805430+0000 mon.a (mon.0) 948 : cluster [INF] mon.a calling monitor election 2026-03-09T14:38:35.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:34 vm07 bash[17480]: cluster 2026-03-09T14:38:33.808571+0000 mon.a (mon.0) 949 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T14:38:35.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:34 vm07 bash[17480]: cluster 2026-03-09T14:38:33.814976+0000 mon.a (mon.0) 950 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0],b=[v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0],c=[v2:192.168.123.107:3301/0,v1:192.168.123.107:6790/0]} 2026-03-09T14:38:35.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:34 vm07 bash[17480]: cluster 2026-03-09T14:38:33.815040+0000 mon.a (mon.0) 951 : cluster [DBG] fsmap 2026-03-09T14:38:35.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:34 vm07 bash[17480]: cluster 2026-03-09T14:38:33.815078+0000 mon.a (mon.0) 952 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-09T14:38:35.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:34 vm07 bash[17480]: cluster 2026-03-09T14:38:33.815484+0000 mon.a (mon.0) 953 : cluster [DBG] mgrmap e32: y(active, since 31s), standbys: x 2026-03-09T14:38:35.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:34 vm07 bash[17480]: cluster 2026-03-09T14:38:33.823390+0000 mon.a (mon.0) 954 : cluster [INF] overall HEALTH_OK 2026-03-09T14:38:35.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:34 vm07 bash[17480]: audit 2026-03-09T14:38:33.829988+0000 mon.a (mon.0) 955 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:35.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:34 vm07 bash[17480]: audit 2026-03-09T14:38:33.837311+0000 mon.a (mon.0) 956 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:35.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:34 vm07 bash[55244]: cluster 2026-03-09T14:38:33.803349+0000 mon.c (mon.1) 1 : cluster [INF] mon.c 
calling monitor election 2026-03-09T14:38:35.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:34 vm07 bash[55244]: cluster 2026-03-09T14:38:33.803349+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election 2026-03-09T14:38:35.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:34 vm07 bash[55244]: cluster 2026-03-09T14:38:33.805430+0000 mon.a (mon.0) 948 : cluster [INF] mon.a calling monitor election 2026-03-09T14:38:35.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:34 vm07 bash[55244]: cluster 2026-03-09T14:38:33.805430+0000 mon.a (mon.0) 948 : cluster [INF] mon.a calling monitor election 2026-03-09T14:38:35.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:34 vm07 bash[55244]: cluster 2026-03-09T14:38:33.808571+0000 mon.a (mon.0) 949 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T14:38:35.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:34 vm07 bash[55244]: cluster 2026-03-09T14:38:33.808571+0000 mon.a (mon.0) 949 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T14:38:35.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:34 vm07 bash[55244]: cluster 2026-03-09T14:38:33.814976+0000 mon.a (mon.0) 950 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0],b=[v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0],c=[v2:192.168.123.107:3301/0,v1:192.168.123.107:6790/0]} 2026-03-09T14:38:35.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:34 vm07 bash[55244]: cluster 2026-03-09T14:38:33.814976+0000 mon.a (mon.0) 950 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0],b=[v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0],c=[v2:192.168.123.107:3301/0,v1:192.168.123.107:6790/0]} 2026-03-09T14:38:35.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:34 vm07 bash[55244]: cluster 2026-03-09T14:38:33.815040+0000 mon.a (mon.0) 951 : cluster [DBG] fsmap 2026-03-09T14:38:35.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:34 vm07 bash[55244]: cluster 2026-03-09T14:38:33.815040+0000 mon.a (mon.0) 951 : cluster [DBG] fsmap 2026-03-09T14:38:35.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:34 vm07 bash[55244]: cluster 2026-03-09T14:38:33.815078+0000 mon.a (mon.0) 952 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-09T14:38:35.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:34 vm07 bash[55244]: cluster 2026-03-09T14:38:33.815078+0000 mon.a (mon.0) 952 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-09T14:38:35.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:34 vm07 bash[55244]: cluster 2026-03-09T14:38:33.815484+0000 mon.a (mon.0) 953 : cluster [DBG] mgrmap e32: y(active, since 31s), standbys: x 2026-03-09T14:38:35.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:34 vm07 bash[55244]: cluster 2026-03-09T14:38:33.815484+0000 mon.a (mon.0) 953 : cluster [DBG] mgrmap e32: y(active, since 31s), standbys: x 2026-03-09T14:38:35.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:34 vm07 bash[55244]: cluster 2026-03-09T14:38:33.823390+0000 mon.a (mon.0) 954 : cluster [INF] overall HEALTH_OK 2026-03-09T14:38:35.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:34 vm07 bash[55244]: cluster 2026-03-09T14:38:33.823390+0000 mon.a (mon.0) 954 : cluster [INF] overall HEALTH_OK 2026-03-09T14:38:35.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:34 vm07 bash[55244]: audit 2026-03-09T14:38:33.829988+0000 mon.a (mon.0) 955 : audit [INF] 
from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:35.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:34 vm07 bash[55244]: audit 2026-03-09T14:38:33.829988+0000 mon.a (mon.0) 955 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:35.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:34 vm07 bash[55244]: audit 2026-03-09T14:38:33.837311+0000 mon.a (mon.0) 956 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:35.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:34 vm07 bash[55244]: audit 2026-03-09T14:38:33.837311+0000 mon.a (mon.0) 956 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:35.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:34 vm11 bash[17885]: cluster 2026-03-09T14:38:33.803349+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election 2026-03-09T14:38:35.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:34 vm11 bash[17885]: cluster 2026-03-09T14:38:33.805430+0000 mon.a (mon.0) 948 : cluster [INF] mon.a calling monitor election 2026-03-09T14:38:35.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:34 vm11 bash[17885]: cluster 2026-03-09T14:38:33.808571+0000 mon.a (mon.0) 949 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T14:38:35.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:34 vm11 bash[17885]: cluster 2026-03-09T14:38:33.814976+0000 mon.a (mon.0) 950 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0],b=[v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0],c=[v2:192.168.123.107:3301/0,v1:192.168.123.107:6790/0]} 2026-03-09T14:38:35.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:34 vm11 bash[17885]: cluster 2026-03-09T14:38:33.815040+0000 mon.a (mon.0) 951 : cluster [DBG] fsmap 2026-03-09T14:38:35.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:34 vm11 bash[17885]: cluster 2026-03-09T14:38:33.815078+0000 mon.a (mon.0) 952 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-09T14:38:35.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:34 vm11 bash[17885]: cluster 2026-03-09T14:38:33.815484+0000 mon.a (mon.0) 953 : cluster [DBG] mgrmap e32: y(active, since 31s), standbys: x 2026-03-09T14:38:35.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:34 vm11 bash[17885]: cluster 2026-03-09T14:38:33.823390+0000 mon.a (mon.0) 954 : cluster [INF] overall HEALTH_OK 2026-03-09T14:38:35.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:34 vm11 bash[17885]: audit 2026-03-09T14:38:33.829988+0000 mon.a (mon.0) 955 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:35.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:34 vm11 bash[17885]: audit 2026-03-09T14:38:33.837311+0000 mon.a (mon.0) 956 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:36.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:35 vm07 bash[55244]: cluster 2026-03-09T14:38:35.056100+0000 mgr.y (mgr.24991) 51 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:38:36.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:35 vm07 bash[55244]: cluster 2026-03-09T14:38:35.056100+0000 mgr.y (mgr.24991) 51 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-09T14:38:36.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:35 vm07 bash[17480]: cluster 2026-03-09T14:38:35.056100+0000 mgr.y (mgr.24991) 51 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:38:36.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:35 vm11 bash[17885]: cluster 2026-03-09T14:38:35.056100+0000 mgr.y (mgr.24991) 51 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:38:37.253 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:36 vm11 bash[41290]: ts=2026-03-09T14:38:36.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:38:38.406 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:38 vm07 bash[55244]: cluster 2026-03-09T14:38:37.056403+0000 mgr.y (mgr.24991) 52 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:38:38.406 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:38 vm07 bash[55244]: cluster 2026-03-09T14:38:37.056403+0000 mgr.y (mgr.24991) 52 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:38:38.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:38 vm07 bash[17480]: cluster 2026-03-09T14:38:37.056403+0000 mgr.y (mgr.24991) 52 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:38:38.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:38 vm11 bash[17885]: cluster 2026-03-09T14:38:37.056403+0000 mgr.y (mgr.24991) 52 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:38:39.114 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:38 vm07 bash[52213]: debug 2026-03-09T14:38:38.716+0000 7fde1338d640 -1 mgr.server handle_report got status from non-daemon mon.c 2026-03-09T14:38:39.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:39 vm11 bash[17885]: audit 
2026-03-09T14:38:37.439030+0000 mgr.y (mgr.24991) 53 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:38:39.406 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:39 vm07 bash[55244]: audit 2026-03-09T14:38:37.439030+0000 mgr.y (mgr.24991) 53 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:38:39.406 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:39 vm07 bash[55244]: audit 2026-03-09T14:38:37.439030+0000 mgr.y (mgr.24991) 53 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:38:39.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:39 vm07 bash[17480]: audit 2026-03-09T14:38:37.439030+0000 mgr.y (mgr.24991) 53 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:38:40.406 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:40 vm07 bash[55244]: cluster 2026-03-09T14:38:39.056750+0000 mgr.y (mgr.24991) 54 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:38:40.406 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:40 vm07 bash[55244]: cluster 2026-03-09T14:38:39.056750+0000 mgr.y (mgr.24991) 54 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:38:40.406 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:40 vm07 bash[55244]: audit 2026-03-09T14:38:39.153218+0000 mon.a (mon.0) 957 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:40.406 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:40 vm07 bash[55244]: audit 2026-03-09T14:38:39.153218+0000 mon.a (mon.0) 957 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:40.406 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:40 vm07 bash[55244]: audit 2026-03-09T14:38:39.164927+0000 mon.a (mon.0) 958 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:40.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:40 vm07 bash[55244]: audit 2026-03-09T14:38:39.164927+0000 mon.a (mon.0) 958 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:40.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:40 vm07 bash[55244]: audit 2026-03-09T14:38:39.265044+0000 mon.a (mon.0) 959 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:40.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:40 vm07 bash[55244]: audit 2026-03-09T14:38:39.265044+0000 mon.a (mon.0) 959 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:40.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:40 vm07 bash[55244]: audit 2026-03-09T14:38:39.270344+0000 mon.a (mon.0) 960 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:40.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:40 vm07 bash[55244]: audit 2026-03-09T14:38:39.270344+0000 mon.a (mon.0) 960 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:40.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:40 vm07 bash[55244]: 
audit 2026-03-09T14:38:39.833206+0000 mon.a (mon.0) 961 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:40.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:40 vm07 bash[55244]: audit 2026-03-09T14:38:39.833206+0000 mon.a (mon.0) 961 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:40.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:40 vm07 bash[55244]: audit 2026-03-09T14:38:39.838602+0000 mon.a (mon.0) 962 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:40.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:40 vm07 bash[55244]: audit 2026-03-09T14:38:39.838602+0000 mon.a (mon.0) 962 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:40.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:40 vm07 bash[17480]: cluster 2026-03-09T14:38:39.056750+0000 mgr.y (mgr.24991) 54 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:38:40.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:40 vm07 bash[17480]: audit 2026-03-09T14:38:39.153218+0000 mon.a (mon.0) 957 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:40.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:40 vm07 bash[17480]: audit 2026-03-09T14:38:39.164927+0000 mon.a (mon.0) 958 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:40.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:40 vm07 bash[17480]: audit 2026-03-09T14:38:39.265044+0000 mon.a (mon.0) 959 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:40.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:40 vm07 bash[17480]: audit 2026-03-09T14:38:39.270344+0000 mon.a (mon.0) 960 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:40.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:40 vm07 bash[17480]: audit 2026-03-09T14:38:39.833206+0000 mon.a (mon.0) 961 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:40.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:40 vm07 bash[17480]: audit 2026-03-09T14:38:39.838602+0000 mon.a (mon.0) 962 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:40.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:40 vm11 bash[17885]: cluster 2026-03-09T14:38:39.056750+0000 mgr.y (mgr.24991) 54 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:38:40.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:40 vm11 bash[17885]: audit 2026-03-09T14:38:39.153218+0000 mon.a (mon.0) 957 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:40.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:40 vm11 bash[17885]: audit 2026-03-09T14:38:39.164927+0000 mon.a (mon.0) 958 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:40.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:40 vm11 bash[17885]: audit 2026-03-09T14:38:39.265044+0000 mon.a (mon.0) 959 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:40.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:40 vm11 bash[17885]: audit 
2026-03-09T14:38:39.270344+0000 mon.a (mon.0) 960 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:40.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:40 vm11 bash[17885]: audit 2026-03-09T14:38:39.833206+0000 mon.a (mon.0) 961 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:40.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:40 vm11 bash[17885]: audit 2026-03-09T14:38:39.838602+0000 mon.a (mon.0) 962 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:42.406 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:42 vm07 bash[55244]: cluster 2026-03-09T14:38:41.057251+0000 mgr.y (mgr.24991) 55 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:38:42.406 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:42 vm07 bash[55244]: cluster 2026-03-09T14:38:41.057251+0000 mgr.y (mgr.24991) 55 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:38:42.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:42 vm07 bash[17480]: cluster 2026-03-09T14:38:41.057251+0000 mgr.y (mgr.24991) 55 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:38:42.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:42 vm11 bash[17885]: cluster 2026-03-09T14:38:41.057251+0000 mgr.y (mgr.24991) 55 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:38:43.906 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:43 vm07 bash[52213]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:38:43] "GET /metrics HTTP/1.1" 200 37778 "" "Prometheus/2.51.0" 2026-03-09T14:38:44.406 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:44 vm07 bash[55244]: cluster 2026-03-09T14:38:43.057568+0000 mgr.y (mgr.24991) 56 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:38:44.406 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:44 vm07 bash[55244]: cluster 2026-03-09T14:38:43.057568+0000 mgr.y (mgr.24991) 56 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:38:44.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:44 vm07 bash[17480]: cluster 2026-03-09T14:38:43.057568+0000 mgr.y (mgr.24991) 56 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:38:44.503 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:44 vm11 bash[41290]: ts=2026-03-09T14:38:44.146Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. 
This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"192.168.123.111:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:38:44.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:44 vm11 bash[17885]: cluster 2026-03-09T14:38:43.057568+0000 mgr.y (mgr.24991) 56 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:38:46.406 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:46 vm07 bash[55244]: cluster 2026-03-09T14:38:45.058180+0000 mgr.y (mgr.24991) 57 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:38:46.406 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:46 vm07 bash[55244]: cluster 2026-03-09T14:38:45.058180+0000 mgr.y (mgr.24991) 57 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:38:46.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:46 vm07 bash[17480]: cluster 2026-03-09T14:38:45.058180+0000 mgr.y (mgr.24991) 57 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:38:46.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:46 vm11 bash[17885]: cluster 2026-03-09T14:38:45.058180+0000 mgr.y (mgr.24991) 57 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:38:47.253 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:46 vm11 bash[41290]: ts=2026-03-09T14:38:46.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate 
series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:38:47.349 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:47.349 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:47 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:47.349 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:47 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:47.349 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:38:47 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:47.349 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:38:47 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:47.350 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:38:47 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
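Both Prometheus rule-evaluation failures in this stretch (CephNodeDiskspaceWarning and CephOSDFlapping) are the same class of problem: the alert expression joins a value metric against a metadata metric with "* on (<label>) group_left (...)", which requires the metadata side to hold exactly one series per match group, but the server has two copies of that series, one carrying the added cluster="f59f9828-..." label and one without it, so the intended one-to-many join degenerates into many-to-many and the rule fails. A purely illustrative Python sketch of that uniqueness rule (not Prometheus code; the two series below are abbreviated from the node_uname_info labels quoted in the log):

from collections import defaultdict

def group_left_join(left, right, on):
    """Mimic PromQL's group_left matching: the right-hand side must be unique per match group."""
    groups = defaultdict(list)
    for labels, _value in right:
        key = tuple(sorted((k, labels[k]) for k in on))
        groups[key].append(labels)
    for key, series in groups.items():
        if len(series) > 1:
            raise ValueError(f"found duplicate series for the match group {dict(key)}: {series}")
    # with a unique right-hand side, each left series would pick up the extra
    # labels (group_left) from its single matching right series here
    return groups

# Abbreviated copies of the two node_uname_info series from the error message:
# same instance, but only one of them carries the 'cluster' label.
right = [
    ({"__name__": "node_uname_info", "instance": "vm07", "nodename": "vm07",
      "cluster": "f59f9828-1bc3-11f1-bfd8-7b3d0c866040"}, 1.0),
    ({"__name__": "node_uname_info", "instance": "vm07", "nodename": "vm07"}, 1.0),
]

try:
    group_left_join(left=[], right=right, on=["instance"])
except ValueError as err:
    print("rule evaluation would fail:", err)

The duplicate pair itself suggests the same exporters are being scraped under two configurations (one that injects the cluster label and one that does not), which is where any cleanup would happen rather than in the alert rules themselves.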
2026-03-09T14:38:47.350 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:38:47 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:47.350 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:38:47 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:47.350 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:38:47 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:47.607 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:47 vm07 bash[55244]: audit 2026-03-09T14:38:46.351251+0000 mon.a (mon.0) 963 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:47.607 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:47 vm07 bash[55244]: audit 2026-03-09T14:38:46.351251+0000 mon.a (mon.0) 963 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:47.607 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:47 vm07 bash[55244]: audit 2026-03-09T14:38:46.356989+0000 mon.a (mon.0) 964 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:47 vm07 bash[55244]: audit 2026-03-09T14:38:46.356989+0000 mon.a (mon.0) 964 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:47 vm07 bash[55244]: audit 2026-03-09T14:38:46.357668+0000 mon.a (mon.0) 965 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:47 vm07 bash[55244]: audit 2026-03-09T14:38:46.357668+0000 mon.a (mon.0) 965 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:47 vm07 bash[55244]: audit 2026-03-09T14:38:46.358223+0000 mon.a (mon.0) 966 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:47 vm07 bash[55244]: audit 2026-03-09T14:38:46.358223+0000 mon.a (mon.0) 966 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:47 vm07 bash[55244]: 
audit 2026-03-09T14:38:46.362645+0000 mon.a (mon.0) 967 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:47 vm07 bash[55244]: audit 2026-03-09T14:38:46.362645+0000 mon.a (mon.0) 967 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:47 vm07 bash[55244]: audit 2026-03-09T14:38:46.402049+0000 mon.a (mon.0) 968 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:47 vm07 bash[55244]: audit 2026-03-09T14:38:46.402049+0000 mon.a (mon.0) 968 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:47 vm07 bash[55244]: audit 2026-03-09T14:38:46.403163+0000 mon.a (mon.0) 969 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:47 vm07 bash[55244]: audit 2026-03-09T14:38:46.403163+0000 mon.a (mon.0) 969 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:47 vm07 bash[55244]: audit 2026-03-09T14:38:46.403819+0000 mon.a (mon.0) 970 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:47 vm07 bash[55244]: audit 2026-03-09T14:38:46.403819+0000 mon.a (mon.0) 970 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:47 vm07 bash[55244]: audit 2026-03-09T14:38:46.404294+0000 mon.a (mon.0) 971 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["a"]}]: dispatch 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:47 vm07 bash[55244]: audit 2026-03-09T14:38:46.404294+0000 mon.a (mon.0) 971 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["a"]}]: dispatch 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:47 vm07 bash[55244]: cephadm 2026-03-09T14:38:46.404593+0000 mgr.y (mgr.24991) 58 : cephadm [INF] Upgrade: It appears safe to stop mon.a 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:47 vm07 bash[55244]: cephadm 2026-03-09T14:38:46.404593+0000 mgr.y (mgr.24991) 58 : cephadm [INF] Upgrade: It appears safe to stop mon.a 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:47 vm07 bash[55244]: audit 2026-03-09T14:38:46.823267+0000 mon.a (mon.0) 972 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:47 vm07 bash[55244]: audit 2026-03-09T14:38:46.823267+0000 mon.a (mon.0) 972 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:47 vm07 bash[55244]: 
audit 2026-03-09T14:38:46.824010+0000 mon.a (mon.0) 973 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:47 vm07 bash[55244]: audit 2026-03-09T14:38:46.824010+0000 mon.a (mon.0) 973 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:47 vm07 bash[55244]: audit 2026-03-09T14:38:46.824464+0000 mon.a (mon.0) 974 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:47 vm07 bash[55244]: audit 2026-03-09T14:38:46.824464+0000 mon.a (mon.0) 974 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:47 vm07 bash[55244]: audit 2026-03-09T14:38:46.824941+0000 mon.a (mon.0) 975 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:47 vm07 bash[55244]: audit 2026-03-09T14:38:46.824941+0000 mon.a (mon.0) 975 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:47 vm07 bash[52213]: [09/Mar/2026:14:38:47] ENGINE Bus STOPPING 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:47 vm07 bash[52213]: [09/Mar/2026:14:38:47] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:47 vm07 bash[52213]: [09/Mar/2026:14:38:47] ENGINE Bus STOPPED 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:47 vm07 bash[52213]: [09/Mar/2026:14:38:47] ENGINE Bus STARTING 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:47 vm07 bash[52213]: [09/Mar/2026:14:38:47] ENGINE Serving on http://:::9283 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:47 vm07 bash[52213]: [09/Mar/2026:14:38:47] ENGINE Bus STARTED 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 systemd[1]: Stopping Ceph mon.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040... 
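The audit entries above trace the orchestrator's gate before it takes mon.a down: it dispatches quorum_status and "mon ok-to-stop" for the target monitor, logs "Upgrade: It appears safe to stop mon.a", regenerates the minimal conf and fetches the mon keyring for the redeploy, and only then does systemd stop the unit. A minimal sketch of the equivalent manual check, assuming root on the host; the fsid and unit naming are taken from the systemd messages in this log, and this is not cephadm's internal code:

import subprocess

FSID = "f59f9828-1bc3-11f1-bfd8-7b3d0c866040"  # cluster fsid as it appears in the unit names above
MON_ID = "a"

# 'ceph mon ok-to-stop' exits non-zero when stopping this monitor would break quorum.
check = subprocess.run(["ceph", "mon", "ok-to-stop", MON_ID])
if check.returncode == 0:
    # Safe to take the monitor down; the upgrade then bounces the per-daemon
    # systemd unit so the container comes back on the new image.
    subprocess.run(["systemctl", "restart", f"ceph-{FSID}@mon.{MON_ID}.service"], check=True)
else:
    print(f"not safe to stop mon.{MON_ID}; leaving it running")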
2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[17480]: audit 2026-03-09T14:38:46.351251+0000 mon.a (mon.0) 963 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[17480]: audit 2026-03-09T14:38:46.356989+0000 mon.a (mon.0) 964 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[17480]: audit 2026-03-09T14:38:46.357668+0000 mon.a (mon.0) 965 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[17480]: audit 2026-03-09T14:38:46.358223+0000 mon.a (mon.0) 966 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[17480]: audit 2026-03-09T14:38:46.362645+0000 mon.a (mon.0) 967 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[17480]: audit 2026-03-09T14:38:46.402049+0000 mon.a (mon.0) 968 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[17480]: audit 2026-03-09T14:38:46.403163+0000 mon.a (mon.0) 969 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[17480]: audit 2026-03-09T14:38:46.403819+0000 mon.a (mon.0) 970 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[17480]: audit 2026-03-09T14:38:46.404294+0000 mon.a (mon.0) 971 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["a"]}]: dispatch 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[17480]: cephadm 2026-03-09T14:38:46.404593+0000 mgr.y (mgr.24991) 58 : cephadm [INF] Upgrade: It appears safe to stop mon.a 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[17480]: audit 2026-03-09T14:38:46.823267+0000 mon.a (mon.0) 972 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[17480]: audit 2026-03-09T14:38:46.824010+0000 mon.a (mon.0) 973 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[17480]: audit 2026-03-09T14:38:46.824464+0000 mon.a (mon.0) 974 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[17480]: audit 2026-03-09T14:38:46.824941+0000 mon.a (mon.0) 975 : 
audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[17480]: debug 2026-03-09T14:38:47.388+0000 7fb18ff4e700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-09T14:38:47.608 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[17480]: debug 2026-03-09T14:38:47.388+0000 7fb18ff4e700 -1 mon.a@0(leader) e3 *** Got Signal Terminated *** 2026-03-09T14:38:47.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:47 vm11 bash[17885]: audit 2026-03-09T14:38:46.351251+0000 mon.a (mon.0) 963 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:47.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:47 vm11 bash[17885]: audit 2026-03-09T14:38:46.356989+0000 mon.a (mon.0) 964 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:47.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:47 vm11 bash[17885]: audit 2026-03-09T14:38:46.357668+0000 mon.a (mon.0) 965 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:38:47.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:47 vm11 bash[17885]: audit 2026-03-09T14:38:46.358223+0000 mon.a (mon.0) 966 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:38:47.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:47 vm11 bash[17885]: audit 2026-03-09T14:38:46.362645+0000 mon.a (mon.0) 967 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:47.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:47 vm11 bash[17885]: audit 2026-03-09T14:38:46.402049+0000 mon.a (mon.0) 968 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:38:47.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:47 vm11 bash[17885]: audit 2026-03-09T14:38:46.403163+0000 mon.a (mon.0) 969 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:38:47.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:47 vm11 bash[17885]: audit 2026-03-09T14:38:46.403819+0000 mon.a (mon.0) 970 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-09T14:38:47.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:47 vm11 bash[17885]: audit 2026-03-09T14:38:46.404294+0000 mon.a (mon.0) 971 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["a"]}]: dispatch 2026-03-09T14:38:47.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:47 vm11 bash[17885]: cephadm 2026-03-09T14:38:46.404593+0000 mgr.y (mgr.24991) 58 : cephadm [INF] Upgrade: It appears safe to stop mon.a 2026-03-09T14:38:47.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:47 vm11 bash[17885]: audit 2026-03-09T14:38:46.823267+0000 mon.a (mon.0) 972 : audit [INF] from='mgr.24991 
192.168.123.107:0/128696238' entity='mgr.y' 2026-03-09T14:38:47.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:47 vm11 bash[17885]: audit 2026-03-09T14:38:46.824010+0000 mon.a (mon.0) 973 : audit [INF] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:38:47.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:47 vm11 bash[17885]: audit 2026-03-09T14:38:46.824464+0000 mon.a (mon.0) 974 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:38:47.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:47 vm11 bash[17885]: audit 2026-03-09T14:38:46.824941+0000 mon.a (mon.0) 975 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:38:47.859 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:38:47 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:47.859 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:47 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:47.859 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:38:47 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:47.859 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:38:47 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:47.859 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:38:47 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:47.859 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:38:47 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:47.859 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56189]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-mon-a 2026-03-09T14:38:47.859 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 systemd[1]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@mon.a.service: Deactivated successfully. 2026-03-09T14:38:47.859 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 systemd[1]: Stopped Ceph mon.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:38:47.859 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:47.859 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 systemd[1]: Started Ceph mon.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:38:47.859 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:47 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:38:47.859 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:38:47 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
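After the unit restart, the version banner that follows shows mon.a coming back as ceph-mon 19.2.3-678-ge911bdeb (squid), while the OSD metadata earlier in this stretch still reports the 17.2.0 quincy build, so the cluster is mid-upgrade at this point. One way to watch the fleet converge is to compare the per-daemon maps reported by "ceph versions"; a minimal sketch, assuming the admin keyring is available and that "ceph versions" emits its usual JSON map:

import json
import subprocess

# 'ceph versions' prints a JSON object with per-daemon-type version counts plus
# an "overall" section; the upgrade is done once "overall" has a single key.
out = subprocess.run(["ceph", "versions"], capture_output=True, text=True, check=True)
versions = json.loads(out.stdout)
overall = versions.get("overall", {})
if len(overall) == 1:
    print("all daemons on", next(iter(overall)))
else:
    print("still mixed versions:", overall)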
2026-03-09T14:38:48.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.928+0000 7f49c33bbd80 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-09T14:38:48.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.928+0000 7f49c33bbd80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 7 2026-03-09T14:38:48.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.928+0000 7f49c33bbd80 0 pidfile_write: ignore empty --pid-file 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.928+0000 7f49c33bbd80 0 load: jerasure load: lrc 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: RocksDB version: 7.9.2 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Git sha 0 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Compile date 2026-02-25 18:11:04 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: DB SUMMARY 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: DB Session ID: U704I6XYQQXXEA8UXCHB 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: CURRENT file: CURRENT 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: IDENTITY file: IDENTITY 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: MANIFEST file: MANIFEST-000015 size: 766 Bytes 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 1, files: 000027.sst 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 000025.log size: 83978 ; 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.error_if_exists: 0 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.create_if_missing: 0 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.paranoid_checks: 1 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 
14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.env: 0x559393d91dc0 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.info_log: 0x5593b84cd7e0 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.statistics: (nil) 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.use_fsync: 0 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.max_log_file_size: 0 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.allow_fallocate: 1 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.use_direct_reads: 0 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-09T14:38:48.157 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.db_log_dir: 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.wal_dir: 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.write_buffer_manager: 0x5593b84d1900 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 
rocksdb: Options.enable_pipelined_write: 0 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.unordered_write: 0 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.row_cache: None 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.wal_filter: None 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-09T14:38:48.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.two_write_queues: 0 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.wal_compression: 0 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.atomic_flush: 0 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.log_readahead_size: 0 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: 
debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.max_background_jobs: 2 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.max_background_compactions: -1 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.max_subcompactions: 1 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.max_open_files: -1 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: 
Options.bytes_per_sync: 0 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.max_background_flushes: -1 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Compression algorithms supported: 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: kZSTD supported: 0 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: kXpressCompression supported: 0 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: kBZip2Compression supported: 0 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: kLZ4Compression supported: 1 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: kZlibCompression supported: 1 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: kSnappyCompression supported: 1 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000015 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 
2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.merge_operator: 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.compaction_filter: None 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5593b84cc3c0) 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: cache_index_and_filter_blocks: 1 2026-03-09T14:38:48.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: pin_top_level_index_and_filter: 1 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: index_type: 0 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: data_block_index_type: 0 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: index_shortening: 1 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: data_block_hash_table_util_ratio: 0.750000 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: checksum: 4 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: no_block_cache: 0 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: block_cache: 0x5593b84f3350 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: block_cache_name: BinnedLRUCache 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: block_cache_options: 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: capacity : 536870912 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: num_shard_bits : 4 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: strict_capacity_limit : 0 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: high_pri_pool_ratio: 0.000 2026-03-09T14:38:48.159 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: block_cache_compressed: (nil) 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: persistent_cache: (nil) 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: block_size: 4096 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: block_size_deviation: 10 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: block_restart_interval: 16 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: index_block_restart_interval: 1 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: metadata_block_size: 4096 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: partition_filters: 0 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: use_delta_encoding: 1 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: filter_policy: bloomfilter 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: whole_key_filtering: 1 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: verify_compression: 0 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: read_amp_bytes_per_bit: 0 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: format_version: 5 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: enable_index_compression: 1 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: block_align: 0 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: max_auto_readahead_size: 262144 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: prepopulate_block_cache: 0 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: initial_auto_readahead_size: 8192 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: num_file_reads_for_auto_readahead: 2 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.compression: NoCompression 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: 
Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.num_levels: 7 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T14:38:48.159 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-09T14:38:48.159 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 
7f49c33bbd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 
rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.inplace_update_support: 0 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.bloom_locality: 0 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.max_successive_merges: 0 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.ttl: 2592000 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-09T14:38:48.160 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.enable_blob_files: false 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.min_blob_size: 0 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.932+0000 7f49c33bbd80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.936+0000 7f49c0193640 3 rocksdb: [table/block_based/block_based_table_reader.cc:721] At least one SST file opened without unique ID to verify: 27.sst 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.936+0000 7f49c33bbd80 4 rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.936+0000 7f49c33bbd80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000015 succeeded,manifest_file_number is 15, next_file_number is 29, last_sequence is 11398, log_number is 25,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.936+0000 7f49c33bbd80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 25 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.936+0000 7f49c33bbd80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a15f1eb3-64c5-40cb-954e-7e6d47d8bfb6 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.936+0000 7f49c33bbd80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773067127940968, "job": 1, "event": "recovery_started", "wal_files": [25]} 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.936+0000 7f49c33bbd80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #25 mode 2 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.940+0000 7f49c33bbd80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773067127942622, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 30, "file_size": 84665, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11390, "largest_seqno": 11456, "table_properties": {"data_size": 83235, "index_size": 269, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 133, "raw_key_size": 1195, "raw_average_key_size": 26, "raw_value_size": 82080, "raw_average_value_size": 1824, "num_data_blocks": 10, "num_entries": 45, "num_filter_entries": 45, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773067127, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a15f1eb3-64c5-40cb-954e-7e6d47d8bfb6", "db_session_id": "U704I6XYQQXXEA8UXCHB", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}} 2026-03-09T14:38:48.160 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.940+0000 7f49c33bbd80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773067127942715, "job": 1, "event": "recovery_finished"} 2026-03-09T14:38:48.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.940+0000 7f49c33bbd80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 32 2026-03-09T14:38:48.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 
vm07 bash[56315]: debug 2026-03-09T14:38:47.940+0000 7f49c33bbd80 4 rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 2026-03-09T14:38:48.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.944+0000 7f49c33bbd80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-a/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-09T14:38:48.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.944+0000 7f49c33bbd80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5593b84f4e00 2026-03-09T14:38:48.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.944+0000 7f49c33bbd80 4 rocksdb: DB pointer 0x5593b8600000 2026-03-09T14:38:48.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.944+0000 7f49c33bbd80 0 starting mon.a rank 0 at public addrs [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] at bind addrs [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon_data /var/lib/ceph/mon/ceph-a fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 2026-03-09T14:38:48.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.944+0000 7f49c33bbd80 1 mon.a@-1(???) e3 preinit fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 2026-03-09T14:38:48.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.944+0000 7f49c33bbd80 0 mon.a@-1(???).mds e1 new map 2026-03-09T14:38:48.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.944+0000 7f49c33bbd80 0 mon.a@-1(???).mds e1 print_map 2026-03-09T14:38:48.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: e1 2026-03-09T14:38:48.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: btime 1970-01-01T00:00:00:000000+0000 2026-03-09T14:38:48.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T14:38:48.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T14:38:48.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: legacy client fscid: -1 2026-03-09T14:38:48.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: 2026-03-09T14:38:48.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: No filesystems configured 2026-03-09T14:38:48.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.944+0000 7f49c33bbd80 0 mon.a@-1(???).osd e91 crush map has features 3314933000854323200, adjusting msgr requires 2026-03-09T14:38:48.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.944+0000 7f49c33bbd80 0 mon.a@-1(???).osd e91 crush map has features 432629239337189376, adjusting msgr requires 2026-03-09T14:38:48.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 
2026-03-09T14:38:47.944+0000 7f49c33bbd80 0 mon.a@-1(???).osd e91 crush map has features 432629239337189376, adjusting msgr requires 2026-03-09T14:38:48.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.944+0000 7f49c33bbd80 0 mon.a@-1(???).osd e91 crush map has features 432629239337189376, adjusting msgr requires 2026-03-09T14:38:48.161 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:47 vm07 bash[56315]: debug 2026-03-09T14:38:47.944+0000 7f49c33bbd80 1 mon.a@-1(???).paxosservice(auth 1..22) refresh upgraded, format 0 -> 3 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: cephadm 2026-03-09T14:38:46.825788+0000 mgr.y (mgr.24991) 60 : cephadm [INF] Deploying daemon mon.a on vm07 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: cephadm 2026-03-09T14:38:46.825788+0000 mgr.y (mgr.24991) 60 : cephadm [INF] Deploying daemon mon.a on vm07 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: cluster 2026-03-09T14:38:47.058500+0000 mgr.y (mgr.24991) 61 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: cluster 2026-03-09T14:38:47.058500+0000 mgr.y (mgr.24991) 61 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: audit 2026-03-09T14:38:47.418804+0000 mon.c (mon.1) 2 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: audit 2026-03-09T14:38:47.418804+0000 mon.c (mon.1) 2 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: audit 2026-03-09T14:38:47.419146+0000 mon.c (mon.1) 3 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: audit 2026-03-09T14:38:47.419146+0000 mon.c (mon.1) 3 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: audit 2026-03-09T14:38:47.419448+0000 mon.c (mon.1) 4 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: audit 2026-03-09T14:38:47.419448+0000 mon.c (mon.1) 4 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: audit 2026-03-09T14:38:47.446109+0000 mgr.y (mgr.24991) 62 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 
2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: audit 2026-03-09T14:38:47.446109+0000 mgr.y (mgr.24991) 62 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: cluster 2026-03-09T14:38:48.152871+0000 mon.a (mon.0) 1 : cluster [INF] mon.a calling monitor election 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: cluster 2026-03-09T14:38:48.152871+0000 mon.a (mon.0) 1 : cluster [INF] mon.a calling monitor election 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: audit 2026-03-09T14:38:48.336629+0000 mon.c (mon.1) 5 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: audit 2026-03-09T14:38:48.336629+0000 mon.c (mon.1) 5 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: cluster 2026-03-09T14:38:49.356531+0000 mon.a (mon.0) 2 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: cluster 2026-03-09T14:38:49.356531+0000 mon.a (mon.0) 2 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: cluster 2026-03-09T14:38:49.361484+0000 mon.a (mon.0) 3 : cluster [DBG] monmap epoch 3 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: cluster 2026-03-09T14:38:49.361484+0000 mon.a (mon.0) 3 : cluster [DBG] monmap epoch 3 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: cluster 2026-03-09T14:38:49.361528+0000 mon.a (mon.0) 4 : cluster [DBG] fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: cluster 2026-03-09T14:38:49.361528+0000 mon.a (mon.0) 4 : cluster [DBG] fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: cluster 2026-03-09T14:38:49.361537+0000 mon.a (mon.0) 5 : cluster [DBG] last_changed 2026-03-09T14:29:59.044579+0000 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: cluster 2026-03-09T14:38:49.361537+0000 mon.a (mon.0) 5 : cluster [DBG] last_changed 2026-03-09T14:29:59.044579+0000 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: cluster 2026-03-09T14:38:49.361545+0000 mon.a (mon.0) 6 : cluster [DBG] created 2026-03-09T14:29:18.743288+0000 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: cluster 2026-03-09T14:38:49.361545+0000 mon.a (mon.0) 6 : cluster [DBG] created 2026-03-09T14:29:18.743288+0000 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: cluster 2026-03-09T14:38:49.361553+0000 mon.a (mon.0) 7 : cluster [DBG] min_mon_release 17 
(quincy) 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: cluster 2026-03-09T14:38:49.361553+0000 mon.a (mon.0) 7 : cluster [DBG] min_mon_release 17 (quincy) 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: cluster 2026-03-09T14:38:49.361564+0000 mon.a (mon.0) 8 : cluster [DBG] election_strategy: 1 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: cluster 2026-03-09T14:38:49.361564+0000 mon.a (mon.0) 8 : cluster [DBG] election_strategy: 1 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: cluster 2026-03-09T14:38:49.361605+0000 mon.a (mon.0) 9 : cluster [DBG] 0: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.a 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: cluster 2026-03-09T14:38:49.361605+0000 mon.a (mon.0) 9 : cluster [DBG] 0: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.a 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: cluster 2026-03-09T14:38:49.361614+0000 mon.a (mon.0) 10 : cluster [DBG] 1: [v2:192.168.123.107:3301/0,v1:192.168.123.107:6790/0] mon.c 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: cluster 2026-03-09T14:38:49.361614+0000 mon.a (mon.0) 10 : cluster [DBG] 1: [v2:192.168.123.107:3301/0,v1:192.168.123.107:6790/0] mon.c 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: cluster 2026-03-09T14:38:49.361623+0000 mon.a (mon.0) 11 : cluster [DBG] 2: [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] mon.b 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: cluster 2026-03-09T14:38:49.361623+0000 mon.a (mon.0) 11 : cluster [DBG] 2: [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] mon.b 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: cluster 2026-03-09T14:38:49.362216+0000 mon.a (mon.0) 12 : cluster [DBG] fsmap 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: cluster 2026-03-09T14:38:49.362216+0000 mon.a (mon.0) 12 : cluster [DBG] fsmap 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: cluster 2026-03-09T14:38:49.362265+0000 mon.a (mon.0) 13 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: cluster 2026-03-09T14:38:49.362265+0000 mon.a (mon.0) 13 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: cluster 2026-03-09T14:38:49.363425+0000 mon.a (mon.0) 14 : cluster [DBG] mgrmap e32: y(active, since 47s), standbys: x 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: cluster 2026-03-09T14:38:49.363425+0000 mon.a (mon.0) 14 : cluster [DBG] mgrmap e32: y(active, since 47s), standbys: x 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: cluster 2026-03-09T14:38:49.363840+0000 mon.a (mon.0) 15 : cluster [INF] overall HEALTH_OK 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: cluster 2026-03-09T14:38:49.363840+0000 mon.a (mon.0) 15 : cluster [INF] overall HEALTH_OK 
2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: audit 2026-03-09T14:38:49.374523+0000 mon.a (mon.0) 16 : audit [INF] from='mgr.24991 ' entity='' 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: audit 2026-03-09T14:38:49.374523+0000 mon.a (mon.0) 16 : audit [INF] from='mgr.24991 ' entity='' 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: cluster 2026-03-09T14:38:49.374873+0000 mon.a (mon.0) 17 : cluster [DBG] mgrmap e33: y(active, since 47s), standbys: x 2026-03-09T14:38:49.661 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:49 vm07 bash[55244]: cluster 2026-03-09T14:38:49.374873+0000 mon.a (mon.0) 17 : cluster [DBG] mgrmap e33: y(active, since 47s), standbys: x 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: cephadm 2026-03-09T14:38:46.825788+0000 mgr.y (mgr.24991) 60 : cephadm [INF] Deploying daemon mon.a on vm07 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: cephadm 2026-03-09T14:38:46.825788+0000 mgr.y (mgr.24991) 60 : cephadm [INF] Deploying daemon mon.a on vm07 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: cluster 2026-03-09T14:38:47.058500+0000 mgr.y (mgr.24991) 61 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: cluster 2026-03-09T14:38:47.058500+0000 mgr.y (mgr.24991) 61 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: audit 2026-03-09T14:38:47.418804+0000 mon.c (mon.1) 2 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: audit 2026-03-09T14:38:47.418804+0000 mon.c (mon.1) 2 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: audit 2026-03-09T14:38:47.419146+0000 mon.c (mon.1) 3 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: audit 2026-03-09T14:38:47.419146+0000 mon.c (mon.1) 3 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: audit 2026-03-09T14:38:47.419448+0000 mon.c (mon.1) 4 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: audit 2026-03-09T14:38:47.419448+0000 mon.c (mon.1) 4 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 
vm07 bash[56315]: audit 2026-03-09T14:38:47.446109+0000 mgr.y (mgr.24991) 62 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: audit 2026-03-09T14:38:47.446109+0000 mgr.y (mgr.24991) 62 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: cluster 2026-03-09T14:38:48.152871+0000 mon.a (mon.0) 1 : cluster [INF] mon.a calling monitor election 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: cluster 2026-03-09T14:38:48.152871+0000 mon.a (mon.0) 1 : cluster [INF] mon.a calling monitor election 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: audit 2026-03-09T14:38:48.336629+0000 mon.c (mon.1) 5 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: audit 2026-03-09T14:38:48.336629+0000 mon.c (mon.1) 5 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: cluster 2026-03-09T14:38:49.356531+0000 mon.a (mon.0) 2 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: cluster 2026-03-09T14:38:49.356531+0000 mon.a (mon.0) 2 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: cluster 2026-03-09T14:38:49.361484+0000 mon.a (mon.0) 3 : cluster [DBG] monmap epoch 3 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: cluster 2026-03-09T14:38:49.361484+0000 mon.a (mon.0) 3 : cluster [DBG] monmap epoch 3 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: cluster 2026-03-09T14:38:49.361528+0000 mon.a (mon.0) 4 : cluster [DBG] fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: cluster 2026-03-09T14:38:49.361528+0000 mon.a (mon.0) 4 : cluster [DBG] fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: cluster 2026-03-09T14:38:49.361537+0000 mon.a (mon.0) 5 : cluster [DBG] last_changed 2026-03-09T14:29:59.044579+0000 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: cluster 2026-03-09T14:38:49.361537+0000 mon.a (mon.0) 5 : cluster [DBG] last_changed 2026-03-09T14:29:59.044579+0000 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: cluster 2026-03-09T14:38:49.361545+0000 mon.a (mon.0) 6 : cluster [DBG] created 2026-03-09T14:29:18.743288+0000 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: cluster 2026-03-09T14:38:49.361545+0000 mon.a (mon.0) 6 : cluster [DBG] created 
2026-03-09T14:29:18.743288+0000 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: cluster 2026-03-09T14:38:49.361553+0000 mon.a (mon.0) 7 : cluster [DBG] min_mon_release 17 (quincy) 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: cluster 2026-03-09T14:38:49.361553+0000 mon.a (mon.0) 7 : cluster [DBG] min_mon_release 17 (quincy) 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: cluster 2026-03-09T14:38:49.361564+0000 mon.a (mon.0) 8 : cluster [DBG] election_strategy: 1 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: cluster 2026-03-09T14:38:49.361564+0000 mon.a (mon.0) 8 : cluster [DBG] election_strategy: 1 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: cluster 2026-03-09T14:38:49.361605+0000 mon.a (mon.0) 9 : cluster [DBG] 0: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.a 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: cluster 2026-03-09T14:38:49.361605+0000 mon.a (mon.0) 9 : cluster [DBG] 0: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.a 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: cluster 2026-03-09T14:38:49.361614+0000 mon.a (mon.0) 10 : cluster [DBG] 1: [v2:192.168.123.107:3301/0,v1:192.168.123.107:6790/0] mon.c 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: cluster 2026-03-09T14:38:49.361614+0000 mon.a (mon.0) 10 : cluster [DBG] 1: [v2:192.168.123.107:3301/0,v1:192.168.123.107:6790/0] mon.c 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: cluster 2026-03-09T14:38:49.361623+0000 mon.a (mon.0) 11 : cluster [DBG] 2: [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] mon.b 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: cluster 2026-03-09T14:38:49.361623+0000 mon.a (mon.0) 11 : cluster [DBG] 2: [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] mon.b 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: cluster 2026-03-09T14:38:49.362216+0000 mon.a (mon.0) 12 : cluster [DBG] fsmap 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: cluster 2026-03-09T14:38:49.362216+0000 mon.a (mon.0) 12 : cluster [DBG] fsmap 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: cluster 2026-03-09T14:38:49.362265+0000 mon.a (mon.0) 13 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: cluster 2026-03-09T14:38:49.362265+0000 mon.a (mon.0) 13 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: cluster 2026-03-09T14:38:49.363425+0000 mon.a (mon.0) 14 : cluster [DBG] mgrmap e32: y(active, since 47s), standbys: x 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: cluster 2026-03-09T14:38:49.363425+0000 mon.a (mon.0) 14 : cluster [DBG] mgrmap e32: y(active, since 47s), standbys: x 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: cluster 2026-03-09T14:38:49.363840+0000 mon.a (mon.0) 15 : cluster 
[INF] overall HEALTH_OK 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: cluster 2026-03-09T14:38:49.363840+0000 mon.a (mon.0) 15 : cluster [INF] overall HEALTH_OK 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: audit 2026-03-09T14:38:49.374523+0000 mon.a (mon.0) 16 : audit [INF] from='mgr.24991 ' entity='' 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: audit 2026-03-09T14:38:49.374523+0000 mon.a (mon.0) 16 : audit [INF] from='mgr.24991 ' entity='' 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: cluster 2026-03-09T14:38:49.374873+0000 mon.a (mon.0) 17 : cluster [DBG] mgrmap e33: y(active, since 47s), standbys: x 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:49 vm07 bash[56315]: cluster 2026-03-09T14:38:49.374873+0000 mon.a (mon.0) 17 : cluster [DBG] mgrmap e33: y(active, since 47s), standbys: x 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:49 vm07 bash[52213]: ignoring --setuser ceph since I am not root 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:49 vm07 bash[52213]: ignoring --setgroup ceph since I am not root 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:49 vm07 bash[52213]: debug 2026-03-09T14:38:49.456+0000 7efe45d75640 1 -- 192.168.123.107:0/4012745193 <== mon.1 v2:192.168.123.107:3301/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 194+0+0 (secure 0 0 0) 0x563124edf4a0 con 0x563124ee1800 2026-03-09T14:38:49.662 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:49 vm07 bash[52213]: debug 2026-03-09T14:38:49.516+0000 7efe485d2140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T14:38:49.663 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:49 vm07 bash[52213]: debug 2026-03-09T14:38:49.548+0000 7efe485d2140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T14:38:49.753 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:49 vm11 bash[41682]: ignoring --setuser ceph since I am not root 2026-03-09T14:38:49.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:49 vm11 bash[41682]: ignoring --setgroup ceph since I am not root 2026-03-09T14:38:49.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:49 vm11 bash[41682]: debug 2026-03-09T14:38:49.486+0000 7fa7f42f0140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T14:38:49.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:49 vm11 bash[41682]: debug 2026-03-09T14:38:49.522+0000 7fa7f42f0140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T14:38:49.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:49 vm11 bash[41682]: debug 2026-03-09T14:38:49.634+0000 7fa7f42f0140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T14:38:49.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:49 vm11 bash[17885]: cephadm 2026-03-09T14:38:46.825788+0000 mgr.y (mgr.24991) 60 : cephadm [INF] Deploying daemon mon.a on vm07 2026-03-09T14:38:49.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:49 vm11 bash[17885]: cluster 2026-03-09T14:38:47.058500+0000 mgr.y (mgr.24991) 61 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:38:49.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:49 vm11 bash[17885]: audit 
2026-03-09T14:38:47.418804+0000 mon.c (mon.1) 2 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:38:49.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:49 vm11 bash[17885]: audit 2026-03-09T14:38:47.419146+0000 mon.c (mon.1) 3 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:38:49.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:49 vm11 bash[17885]: audit 2026-03-09T14:38:47.419448+0000 mon.c (mon.1) 4 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:38:49.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:49 vm11 bash[17885]: audit 2026-03-09T14:38:47.446109+0000 mgr.y (mgr.24991) 62 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:38:49.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:49 vm11 bash[17885]: cluster 2026-03-09T14:38:48.152871+0000 mon.a (mon.0) 1 : cluster [INF] mon.a calling monitor election 2026-03-09T14:38:49.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:49 vm11 bash[17885]: audit 2026-03-09T14:38:48.336629+0000 mon.c (mon.1) 5 : audit [DBG] from='mgr.24991 192.168.123.107:0/128696238' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:38:49.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:49 vm11 bash[17885]: cluster 2026-03-09T14:38:49.356531+0000 mon.a (mon.0) 2 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T14:38:49.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:49 vm11 bash[17885]: cluster 2026-03-09T14:38:49.361484+0000 mon.a (mon.0) 3 : cluster [DBG] monmap epoch 3 2026-03-09T14:38:49.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:49 vm11 bash[17885]: cluster 2026-03-09T14:38:49.361528+0000 mon.a (mon.0) 4 : cluster [DBG] fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 2026-03-09T14:38:49.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:49 vm11 bash[17885]: cluster 2026-03-09T14:38:49.361537+0000 mon.a (mon.0) 5 : cluster [DBG] last_changed 2026-03-09T14:29:59.044579+0000 2026-03-09T14:38:49.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:49 vm11 bash[17885]: cluster 2026-03-09T14:38:49.361545+0000 mon.a (mon.0) 6 : cluster [DBG] created 2026-03-09T14:29:18.743288+0000 2026-03-09T14:38:49.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:49 vm11 bash[17885]: cluster 2026-03-09T14:38:49.361553+0000 mon.a (mon.0) 7 : cluster [DBG] min_mon_release 17 (quincy) 2026-03-09T14:38:49.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:49 vm11 bash[17885]: cluster 2026-03-09T14:38:49.361564+0000 mon.a (mon.0) 8 : cluster [DBG] election_strategy: 1 2026-03-09T14:38:49.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:49 vm11 bash[17885]: cluster 2026-03-09T14:38:49.361605+0000 mon.a (mon.0) 9 : cluster [DBG] 0: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.a 2026-03-09T14:38:49.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:49 vm11 bash[17885]: cluster 2026-03-09T14:38:49.361614+0000 mon.a (mon.0) 10 : cluster [DBG] 1: [v2:192.168.123.107:3301/0,v1:192.168.123.107:6790/0] mon.c 2026-03-09T14:38:49.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:49 vm11 bash[17885]: cluster 2026-03-09T14:38:49.361623+0000 mon.a 
(mon.0) 11 : cluster [DBG] 2: [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] mon.b 2026-03-09T14:38:49.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:49 vm11 bash[17885]: cluster 2026-03-09T14:38:49.362216+0000 mon.a (mon.0) 12 : cluster [DBG] fsmap 2026-03-09T14:38:49.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:49 vm11 bash[17885]: cluster 2026-03-09T14:38:49.362265+0000 mon.a (mon.0) 13 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in 2026-03-09T14:38:49.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:49 vm11 bash[17885]: cluster 2026-03-09T14:38:49.363425+0000 mon.a (mon.0) 14 : cluster [DBG] mgrmap e32: y(active, since 47s), standbys: x 2026-03-09T14:38:49.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:49 vm11 bash[17885]: cluster 2026-03-09T14:38:49.363840+0000 mon.a (mon.0) 15 : cluster [INF] overall HEALTH_OK 2026-03-09T14:38:49.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:49 vm11 bash[17885]: audit 2026-03-09T14:38:49.374523+0000 mon.a (mon.0) 16 : audit [INF] from='mgr.24991 ' entity='' 2026-03-09T14:38:49.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:49 vm11 bash[17885]: cluster 2026-03-09T14:38:49.374873+0000 mon.a (mon.0) 17 : cluster [DBG] mgrmap e33: y(active, since 47s), standbys: x 2026-03-09T14:38:49.931 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:49 vm07 bash[52213]: debug 2026-03-09T14:38:49.664+0000 7efe485d2140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T14:38:50.253 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:49 vm11 bash[41682]: debug 2026-03-09T14:38:49.910+0000 7fa7f42f0140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T14:38:50.402 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:49 vm07 bash[52213]: debug 2026-03-09T14:38:49.936+0000 7efe485d2140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T14:38:50.656 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:50 vm07 bash[52213]: debug 2026-03-09T14:38:50.404+0000 7efe485d2140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T14:38:50.656 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:50 vm07 bash[52213]: debug 2026-03-09T14:38:50.492+0000 7efe485d2140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T14:38:50.656 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:50 vm07 bash[52213]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T14:38:50.656 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:50 vm07 bash[52213]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-09T14:38:50.656 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:50 vm07 bash[52213]: from numpy import show_config as show_numpy_config 2026-03-09T14:38:50.656 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:50 vm07 bash[52213]: debug 2026-03-09T14:38:50.612+0000 7efe485d2140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T14:38:50.753 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:50 vm11 bash[41682]: debug 2026-03-09T14:38:50.406+0000 7fa7f42f0140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T14:38:50.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:50 vm11 bash[41682]: debug 2026-03-09T14:38:50.502+0000 7fa7f42f0140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T14:38:50.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:50 vm11 bash[41682]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T14:38:50.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:50 vm11 bash[41682]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-09T14:38:50.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:50 vm11 bash[41682]: from numpy import show_config as show_numpy_config 2026-03-09T14:38:50.754 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:50 vm11 bash[41682]: debug 2026-03-09T14:38:50.626+0000 7fa7f42f0140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T14:38:51.156 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:50 vm07 bash[52213]: debug 2026-03-09T14:38:50.744+0000 7efe485d2140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T14:38:51.156 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:50 vm07 bash[52213]: debug 2026-03-09T14:38:50.780+0000 7efe485d2140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T14:38:51.156 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:50 vm07 bash[52213]: debug 2026-03-09T14:38:50.816+0000 7efe485d2140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T14:38:51.156 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:50 vm07 bash[52213]: debug 2026-03-09T14:38:50.860+0000 7efe485d2140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T14:38:51.156 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:50 vm07 bash[52213]: debug 2026-03-09T14:38:50.908+0000 7efe485d2140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T14:38:51.253 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:50 vm11 bash[41682]: debug 2026-03-09T14:38:50.754+0000 7fa7f42f0140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T14:38:51.253 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:50 vm11 bash[41682]: debug 2026-03-09T14:38:50.794+0000 7fa7f42f0140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T14:38:51.254 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:50 vm11 bash[41682]: debug 2026-03-09T14:38:50.830+0000 7fa7f42f0140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T14:38:51.254 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:50 
vm11 bash[41682]: debug 2026-03-09T14:38:50.870+0000 7fa7f42f0140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T14:38:51.254 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:50 vm11 bash[41682]: debug 2026-03-09T14:38:50.914+0000 7fa7f42f0140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T14:38:51.573 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:51 vm07 bash[55244]: cluster 2026-03-09T14:38:50.382732+0000 mon.a (mon.0) 18 : cluster [DBG] mgrmap e34: y(active, since 48s), standbys: x 2026-03-09T14:38:51.574 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:51 vm07 bash[55244]: cluster 2026-03-09T14:38:50.382732+0000 mon.a (mon.0) 18 : cluster [DBG] mgrmap e34: y(active, since 48s), standbys: x 2026-03-09T14:38:51.574 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:51 vm07 bash[56315]: cluster 2026-03-09T14:38:50.382732+0000 mon.a (mon.0) 18 : cluster [DBG] mgrmap e34: y(active, since 48s), standbys: x 2026-03-09T14:38:51.574 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:51 vm07 bash[56315]: cluster 2026-03-09T14:38:50.382732+0000 mon.a (mon.0) 18 : cluster [DBG] mgrmap e34: y(active, since 48s), standbys: x 2026-03-09T14:38:51.574 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:51 vm07 bash[52213]: debug 2026-03-09T14:38:51.324+0000 7efe485d2140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T14:38:51.574 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:51 vm07 bash[52213]: debug 2026-03-09T14:38:51.360+0000 7efe485d2140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T14:38:51.574 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:51 vm07 bash[52213]: debug 2026-03-09T14:38:51.400+0000 7efe485d2140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T14:38:51.574 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:51 vm07 bash[52213]: debug 2026-03-09T14:38:51.536+0000 7efe485d2140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T14:38:51.585 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:51 vm11 bash[41682]: debug 2026-03-09T14:38:51.334+0000 7fa7f42f0140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T14:38:51.585 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:51 vm11 bash[41682]: debug 2026-03-09T14:38:51.370+0000 7fa7f42f0140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T14:38:51.585 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:51 vm11 bash[41682]: debug 2026-03-09T14:38:51.406+0000 7fa7f42f0140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T14:38:51.585 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:51 vm11 bash[41682]: debug 2026-03-09T14:38:51.542+0000 7fa7f42f0140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T14:38:51.585 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:51 vm11 bash[17885]: cluster 2026-03-09T14:38:50.382732+0000 mon.a (mon.0) 18 : cluster [DBG] mgrmap e34: y(active, since 48s), standbys: x 2026-03-09T14:38:51.870 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:51 vm07 bash[52213]: debug 2026-03-09T14:38:51.576+0000 7efe485d2140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T14:38:51.870 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:51 vm07 bash[52213]: debug 2026-03-09T14:38:51.616+0000 7efe485d2140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T14:38:51.870 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:51 vm07 bash[52213]: debug 
2026-03-09T14:38:51.724+0000 7efe485d2140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T14:38:51.881 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:51 vm11 bash[41682]: debug 2026-03-09T14:38:51.586+0000 7fa7f42f0140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T14:38:51.881 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:51 vm11 bash[41682]: debug 2026-03-09T14:38:51.622+0000 7fa7f42f0140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T14:38:51.881 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:51 vm11 bash[41682]: debug 2026-03-09T14:38:51.730+0000 7fa7f42f0140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T14:38:52.156 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:51 vm07 bash[52213]: debug 2026-03-09T14:38:51.872+0000 7efe485d2140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T14:38:52.156 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:52 vm07 bash[52213]: debug 2026-03-09T14:38:52.040+0000 7efe485d2140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T14:38:52.156 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:52 vm07 bash[52213]: debug 2026-03-09T14:38:52.072+0000 7efe485d2140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T14:38:52.156 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:52 vm07 bash[52213]: debug 2026-03-09T14:38:52.116+0000 7efe485d2140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T14:38:52.253 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:51 vm11 bash[41682]: debug 2026-03-09T14:38:51.882+0000 7fa7f42f0140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T14:38:52.254 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:52 vm11 bash[41682]: debug 2026-03-09T14:38:52.046+0000 7fa7f42f0140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T14:38:52.254 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:52 vm11 bash[41682]: debug 2026-03-09T14:38:52.082+0000 7fa7f42f0140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T14:38:52.254 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:52 vm11 bash[41682]: debug 2026-03-09T14:38:52.122+0000 7fa7f42f0140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T14:38:52.543 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:52 vm07 bash[52213]: debug 2026-03-09T14:38:52.264+0000 7efe485d2140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T14:38:52.543 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:52 vm07 bash[52213]: debug 2026-03-09T14:38:52.484+0000 7efe485d2140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T14:38:52.543 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:52 vm11 bash[41682]: debug 2026-03-09T14:38:52.286+0000 7fa7f42f0140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T14:38:52.543 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:52 vm11 bash[41682]: debug 2026-03-09T14:38:52.526+0000 7fa7f42f0140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T14:38:52.543 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:52 vm11 bash[41682]: [09/Mar/2026:14:38:52] ENGINE Bus STARTING 2026-03-09T14:38:52.543 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:52 vm11 bash[41682]: CherryPy Checker: 2026-03-09T14:38:52.543 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:52 vm11 bash[41682]: The 
Application mounted at '' has an empty config. 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: cluster 2026-03-09T14:38:52.490754+0000 mon.a (mon.0) 19 : cluster [INF] Active manager daemon y restarted 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: cluster 2026-03-09T14:38:52.490754+0000 mon.a (mon.0) 19 : cluster [INF] Active manager daemon y restarted 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: cluster 2026-03-09T14:38:52.491007+0000 mon.a (mon.0) 20 : cluster [INF] Activating manager daemon y 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: cluster 2026-03-09T14:38:52.491007+0000 mon.a (mon.0) 20 : cluster [INF] Activating manager daemon y 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: cluster 2026-03-09T14:38:52.498662+0000 mon.a (mon.0) 21 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: cluster 2026-03-09T14:38:52.498662+0000 mon.a (mon.0) 21 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: cluster 2026-03-09T14:38:52.499112+0000 mon.a (mon.0) 22 : cluster [DBG] mgrmap e35: y(active, starting, since 0.00819652s), standbys: x 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: cluster 2026-03-09T14:38:52.499112+0000 mon.a (mon.0) 22 : cluster [DBG] mgrmap e35: y(active, starting, since 0.00819652s), standbys: x 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: audit 2026-03-09T14:38:52.517466+0000 mon.a (mon.0) 23 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: audit 2026-03-09T14:38:52.517466+0000 mon.a (mon.0) 23 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: audit 2026-03-09T14:38:52.517562+0000 mon.a (mon.0) 24 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: audit 2026-03-09T14:38:52.517562+0000 mon.a (mon.0) 24 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: audit 2026-03-09T14:38:52.517630+0000 mon.a (mon.0) 25 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: audit 2026-03-09T14:38:52.517630+0000 mon.a (mon.0) 25 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: audit 2026-03-09T14:38:52.518871+0000 mon.a (mon.0) 26 
: audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: audit 2026-03-09T14:38:52.518871+0000 mon.a (mon.0) 26 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: audit 2026-03-09T14:38:52.518966+0000 mon.a (mon.0) 27 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: audit 2026-03-09T14:38:52.518966+0000 mon.a (mon.0) 27 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: audit 2026-03-09T14:38:52.519075+0000 mon.a (mon.0) 28 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: audit 2026-03-09T14:38:52.519075+0000 mon.a (mon.0) 28 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: audit 2026-03-09T14:38:52.519186+0000 mon.a (mon.0) 29 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: audit 2026-03-09T14:38:52.519186+0000 mon.a (mon.0) 29 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: audit 2026-03-09T14:38:52.519290+0000 mon.a (mon.0) 30 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: audit 2026-03-09T14:38:52.519290+0000 mon.a (mon.0) 30 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: audit 2026-03-09T14:38:52.519402+0000 mon.a (mon.0) 31 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: audit 2026-03-09T14:38:52.519402+0000 mon.a (mon.0) 31 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: audit 2026-03-09T14:38:52.519503+0000 mon.a (mon.0) 32 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:38:52.818 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: audit 2026-03-09T14:38:52.519503+0000 mon.a (mon.0) 32 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: audit 2026-03-09T14:38:52.519607+0000 mon.a (mon.0) 33 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: audit 2026-03-09T14:38:52.519607+0000 mon.a (mon.0) 33 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: audit 2026-03-09T14:38:52.519707+0000 mon.a (mon.0) 34 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: audit 2026-03-09T14:38:52.519707+0000 mon.a (mon.0) 34 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: audit 2026-03-09T14:38:52.519806+0000 mon.a (mon.0) 35 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: audit 2026-03-09T14:38:52.519806+0000 mon.a (mon.0) 35 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: audit 2026-03-09T14:38:52.520830+0000 mon.a (mon.0) 36 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: audit 2026-03-09T14:38:52.520830+0000 mon.a (mon.0) 36 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: audit 2026-03-09T14:38:52.520996+0000 mon.a (mon.0) 37 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: audit 2026-03-09T14:38:52.520996+0000 mon.a (mon.0) 37 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: audit 2026-03-09T14:38:52.521264+0000 mon.a (mon.0) 38 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: audit 2026-03-09T14:38:52.521264+0000 mon.a (mon.0) 38 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 
2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: cluster 2026-03-09T14:38:52.529448+0000 mon.a (mon.0) 39 : cluster [INF] Manager daemon y is now available 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: cluster 2026-03-09T14:38:52.529448+0000 mon.a (mon.0) 39 : cluster [INF] Manager daemon y is now available 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: cluster 2026-03-09T14:38:52.536845+0000 mon.a (mon.0) 40 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: cluster 2026-03-09T14:38:52.536845+0000 mon.a (mon.0) 40 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: cluster 2026-03-09T14:38:52.536944+0000 mon.a (mon.0) 41 : cluster [DBG] Standby manager daemon x started 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: cluster 2026-03-09T14:38:52.536944+0000 mon.a (mon.0) 41 : cluster [DBG] Standby manager daemon x started 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: audit 2026-03-09T14:38:52.537071+0000 mon.b (mon.2) 114 : audit [DBG] from='mgr.? 192.168.123.111:0/4075428117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T14:38:52.818 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: audit 2026-03-09T14:38:52.537071+0000 mon.b (mon.2) 114 : audit [DBG] from='mgr.? 192.168.123.111:0/4075428117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: audit 2026-03-09T14:38:52.537753+0000 mon.b (mon.2) 115 : audit [DBG] from='mgr.? 192.168.123.111:0/4075428117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: audit 2026-03-09T14:38:52.537753+0000 mon.b (mon.2) 115 : audit [DBG] from='mgr.? 192.168.123.111:0/4075428117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: audit 2026-03-09T14:38:52.538958+0000 mon.b (mon.2) 116 : audit [DBG] from='mgr.? 192.168.123.111:0/4075428117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:52 vm07 bash[55244]: audit 2026-03-09T14:38:52.538958+0000 mon.b (mon.2) 116 : audit [DBG] from='mgr.? 
192.168.123.111:0/4075428117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: cluster 2026-03-09T14:38:52.490754+0000 mon.a (mon.0) 19 : cluster [INF] Active manager daemon y restarted 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: cluster 2026-03-09T14:38:52.490754+0000 mon.a (mon.0) 19 : cluster [INF] Active manager daemon y restarted 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: cluster 2026-03-09T14:38:52.491007+0000 mon.a (mon.0) 20 : cluster [INF] Activating manager daemon y 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: cluster 2026-03-09T14:38:52.491007+0000 mon.a (mon.0) 20 : cluster [INF] Activating manager daemon y 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: cluster 2026-03-09T14:38:52.498662+0000 mon.a (mon.0) 21 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: cluster 2026-03-09T14:38:52.498662+0000 mon.a (mon.0) 21 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: cluster 2026-03-09T14:38:52.499112+0000 mon.a (mon.0) 22 : cluster [DBG] mgrmap e35: y(active, starting, since 0.00819652s), standbys: x 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: cluster 2026-03-09T14:38:52.499112+0000 mon.a (mon.0) 22 : cluster [DBG] mgrmap e35: y(active, starting, since 0.00819652s), standbys: x 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: audit 2026-03-09T14:38:52.517466+0000 mon.a (mon.0) 23 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: audit 2026-03-09T14:38:52.517466+0000 mon.a (mon.0) 23 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: audit 2026-03-09T14:38:52.517562+0000 mon.a (mon.0) 24 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: audit 2026-03-09T14:38:52.517562+0000 mon.a (mon.0) 24 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: audit 2026-03-09T14:38:52.517630+0000 mon.a (mon.0) 25 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: audit 2026-03-09T14:38:52.517630+0000 mon.a (mon.0) 25 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 
vm07 bash[56315]: audit 2026-03-09T14:38:52.518871+0000 mon.a (mon.0) 26 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: audit 2026-03-09T14:38:52.518871+0000 mon.a (mon.0) 26 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: audit 2026-03-09T14:38:52.518966+0000 mon.a (mon.0) 27 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: audit 2026-03-09T14:38:52.518966+0000 mon.a (mon.0) 27 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: audit 2026-03-09T14:38:52.519075+0000 mon.a (mon.0) 28 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: audit 2026-03-09T14:38:52.519075+0000 mon.a (mon.0) 28 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: audit 2026-03-09T14:38:52.519186+0000 mon.a (mon.0) 29 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: audit 2026-03-09T14:38:52.519186+0000 mon.a (mon.0) 29 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: audit 2026-03-09T14:38:52.519290+0000 mon.a (mon.0) 30 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: audit 2026-03-09T14:38:52.519290+0000 mon.a (mon.0) 30 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: audit 2026-03-09T14:38:52.519402+0000 mon.a (mon.0) 31 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: audit 2026-03-09T14:38:52.519402+0000 mon.a (mon.0) 31 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: audit 2026-03-09T14:38:52.519503+0000 mon.a (mon.0) 32 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": 
"osd metadata", "id": 4}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: audit 2026-03-09T14:38:52.519503+0000 mon.a (mon.0) 32 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: audit 2026-03-09T14:38:52.519607+0000 mon.a (mon.0) 33 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: audit 2026-03-09T14:38:52.519607+0000 mon.a (mon.0) 33 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: audit 2026-03-09T14:38:52.519707+0000 mon.a (mon.0) 34 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: audit 2026-03-09T14:38:52.519707+0000 mon.a (mon.0) 34 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: audit 2026-03-09T14:38:52.519806+0000 mon.a (mon.0) 35 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: audit 2026-03-09T14:38:52.519806+0000 mon.a (mon.0) 35 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: audit 2026-03-09T14:38:52.520830+0000 mon.a (mon.0) 36 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: audit 2026-03-09T14:38:52.520830+0000 mon.a (mon.0) 36 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: audit 2026-03-09T14:38:52.520996+0000 mon.a (mon.0) 37 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: audit 2026-03-09T14:38:52.520996+0000 mon.a (mon.0) 37 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: audit 2026-03-09T14:38:52.521264+0000 mon.a (mon.0) 38 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: audit 2026-03-09T14:38:52.521264+0000 mon.a (mon.0) 38 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 
cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: cluster 2026-03-09T14:38:52.529448+0000 mon.a (mon.0) 39 : cluster [INF] Manager daemon y is now available 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: cluster 2026-03-09T14:38:52.529448+0000 mon.a (mon.0) 39 : cluster [INF] Manager daemon y is now available 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: cluster 2026-03-09T14:38:52.536845+0000 mon.a (mon.0) 40 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: cluster 2026-03-09T14:38:52.536845+0000 mon.a (mon.0) 40 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: cluster 2026-03-09T14:38:52.536944+0000 mon.a (mon.0) 41 : cluster [DBG] Standby manager daemon x started 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: cluster 2026-03-09T14:38:52.536944+0000 mon.a (mon.0) 41 : cluster [DBG] Standby manager daemon x started 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: audit 2026-03-09T14:38:52.537071+0000 mon.b (mon.2) 114 : audit [DBG] from='mgr.? 192.168.123.111:0/4075428117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: audit 2026-03-09T14:38:52.537071+0000 mon.b (mon.2) 114 : audit [DBG] from='mgr.? 192.168.123.111:0/4075428117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: audit 2026-03-09T14:38:52.537753+0000 mon.b (mon.2) 115 : audit [DBG] from='mgr.? 192.168.123.111:0/4075428117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T14:38:52.819 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: audit 2026-03-09T14:38:52.537753+0000 mon.b (mon.2) 115 : audit [DBG] from='mgr.? 192.168.123.111:0/4075428117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T14:38:52.820 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: audit 2026-03-09T14:38:52.538958+0000 mon.b (mon.2) 116 : audit [DBG] from='mgr.? 192.168.123.111:0/4075428117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T14:38:52.820 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:52 vm07 bash[56315]: audit 2026-03-09T14:38:52.538958+0000 mon.b (mon.2) 116 : audit [DBG] from='mgr.? 192.168.123.111:0/4075428117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T14:38:52.820 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:52 vm07 bash[52213]: [09/Mar/2026:14:38:52] ENGINE Bus STARTING 2026-03-09T14:38:52.820 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:52 vm07 bash[52213]: CherryPy Checker: 2026-03-09T14:38:52.820 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:52 vm07 bash[52213]: The Application mounted at '' has an empty config. 
2026-03-09T14:38:52.820 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:52 vm07 bash[52213]: [09/Mar/2026:14:38:52] ENGINE Serving on http://:::9283 2026-03-09T14:38:52.820 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:52 vm07 bash[52213]: [09/Mar/2026:14:38:52] ENGINE Bus STARTED 2026-03-09T14:38:52.828 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:52 vm11 bash[41682]: [09/Mar/2026:14:38:52] ENGINE Serving on http://:::9283 2026-03-09T14:38:52.828 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:38:52 vm11 bash[41682]: [09/Mar/2026:14:38:52] ENGINE Bus STARTED 2026-03-09T14:38:52.828 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:52 vm11 bash[17885]: cluster 2026-03-09T14:38:52.490754+0000 mon.a (mon.0) 19 : cluster [INF] Active manager daemon y restarted 2026-03-09T14:38:52.828 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:52 vm11 bash[17885]: cluster 2026-03-09T14:38:52.491007+0000 mon.a (mon.0) 20 : cluster [INF] Activating manager daemon y 2026-03-09T14:38:52.828 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:52 vm11 bash[17885]: cluster 2026-03-09T14:38:52.498662+0000 mon.a (mon.0) 21 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-09T14:38:52.828 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:52 vm11 bash[17885]: cluster 2026-03-09T14:38:52.499112+0000 mon.a (mon.0) 22 : cluster [DBG] mgrmap e35: y(active, starting, since 0.00819652s), standbys: x 2026-03-09T14:38:52.828 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:52 vm11 bash[17885]: audit 2026-03-09T14:38:52.517466+0000 mon.a (mon.0) 23 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:38:52.828 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:52 vm11 bash[17885]: audit 2026-03-09T14:38:52.517562+0000 mon.a (mon.0) 24 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:38:52.828 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:52 vm11 bash[17885]: audit 2026-03-09T14:38:52.517630+0000 mon.a (mon.0) 25 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:38:52.829 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:52 vm11 bash[17885]: audit 2026-03-09T14:38:52.518871+0000 mon.a (mon.0) 26 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T14:38:52.829 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:52 vm11 bash[17885]: audit 2026-03-09T14:38:52.518966+0000 mon.a (mon.0) 27 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T14:38:52.829 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:52 vm11 bash[17885]: audit 2026-03-09T14:38:52.519075+0000 mon.a (mon.0) 28 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:38:52.829 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:52 vm11 bash[17885]: audit 2026-03-09T14:38:52.519186+0000 mon.a (mon.0) 29 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:38:52.829 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:52 vm11 bash[17885]: audit 2026-03-09T14:38:52.519290+0000 mon.a (mon.0) 30 
: audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:38:52.829 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:52 vm11 bash[17885]: audit 2026-03-09T14:38:52.519402+0000 mon.a (mon.0) 31 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:38:52.829 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:52 vm11 bash[17885]: audit 2026-03-09T14:38:52.519503+0000 mon.a (mon.0) 32 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:38:52.829 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:52 vm11 bash[17885]: audit 2026-03-09T14:38:52.519607+0000 mon.a (mon.0) 33 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:38:52.829 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:52 vm11 bash[17885]: audit 2026-03-09T14:38:52.519707+0000 mon.a (mon.0) 34 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:38:52.829 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:52 vm11 bash[17885]: audit 2026-03-09T14:38:52.519806+0000 mon.a (mon.0) 35 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:38:52.829 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:52 vm11 bash[17885]: audit 2026-03-09T14:38:52.520830+0000 mon.a (mon.0) 36 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T14:38:52.829 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:52 vm11 bash[17885]: audit 2026-03-09T14:38:52.520996+0000 mon.a (mon.0) 37 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T14:38:52.829 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:52 vm11 bash[17885]: audit 2026-03-09T14:38:52.521264+0000 mon.a (mon.0) 38 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T14:38:52.829 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:52 vm11 bash[17885]: cluster 2026-03-09T14:38:52.529448+0000 mon.a (mon.0) 39 : cluster [INF] Manager daemon y is now available 2026-03-09T14:38:52.829 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:52 vm11 bash[17885]: cluster 2026-03-09T14:38:52.536845+0000 mon.a (mon.0) 40 : cluster [DBG] Standby manager daemon x restarted 2026-03-09T14:38:52.829 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:52 vm11 bash[17885]: cluster 2026-03-09T14:38:52.536944+0000 mon.a (mon.0) 41 : cluster [DBG] Standby manager daemon x started 2026-03-09T14:38:52.829 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:52 vm11 bash[17885]: audit 2026-03-09T14:38:52.537071+0000 mon.b (mon.2) 114 : audit [DBG] from='mgr.? 192.168.123.111:0/4075428117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T14:38:52.829 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:52 vm11 bash[17885]: audit 2026-03-09T14:38:52.537753+0000 mon.b (mon.2) 115 : audit [DBG] from='mgr.? 
192.168.123.111:0/4075428117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T14:38:52.829 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:52 vm11 bash[17885]: audit 2026-03-09T14:38:52.538958+0000 mon.b (mon.2) 116 : audit [DBG] from='mgr.? 192.168.123.111:0/4075428117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T14:38:53.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:53 vm07 bash[55244]: audit 2026-03-09T14:38:52.539624+0000 mon.b (mon.2) 117 : audit [DBG] from='mgr.? 192.168.123.111:0/4075428117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T14:38:53.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:53 vm07 bash[55244]: audit 2026-03-09T14:38:52.539624+0000 mon.b (mon.2) 117 : audit [DBG] from='mgr.? 192.168.123.111:0/4075428117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T14:38:53.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:53 vm07 bash[55244]: audit 2026-03-09T14:38:52.571118+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:38:53.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:53 vm07 bash[55244]: audit 2026-03-09T14:38:52.571118+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:38:53.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:53 vm07 bash[55244]: audit 2026-03-09T14:38:52.576672+0000 mon.a (mon.0) 43 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:38:53.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:53 vm07 bash[55244]: audit 2026-03-09T14:38:52.576672+0000 mon.a (mon.0) 43 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:38:53.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:53 vm07 bash[55244]: audit 2026-03-09T14:38:52.634037+0000 mon.a (mon.0) 44 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:38:53.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:53 vm07 bash[55244]: audit 2026-03-09T14:38:52.634037+0000 mon.a (mon.0) 44 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:38:53.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:53 vm07 bash[55244]: cluster 2026-03-09T14:38:53.505625+0000 mon.a (mon.0) 45 : cluster [DBG] mgrmap e36: y(active, since 1.01471s), standbys: x 2026-03-09T14:38:53.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:53 vm07 bash[55244]: cluster 2026-03-09T14:38:53.505625+0000 mon.a (mon.0) 45 : cluster [DBG] mgrmap e36: y(active, since 1.01471s), standbys: x 2026-03-09T14:38:53.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:53 vm07 bash[56315]: audit 2026-03-09T14:38:52.539624+0000 mon.b (mon.2) 117 : audit [DBG] from='mgr.? 
192.168.123.111:0/4075428117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T14:38:53.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:53 vm07 bash[56315]: audit 2026-03-09T14:38:52.539624+0000 mon.b (mon.2) 117 : audit [DBG] from='mgr.? 192.168.123.111:0/4075428117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T14:38:53.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:53 vm07 bash[56315]: audit 2026-03-09T14:38:52.571118+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:38:53.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:53 vm07 bash[56315]: audit 2026-03-09T14:38:52.571118+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:38:53.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:53 vm07 bash[56315]: audit 2026-03-09T14:38:52.576672+0000 mon.a (mon.0) 43 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:38:53.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:53 vm07 bash[56315]: audit 2026-03-09T14:38:52.576672+0000 mon.a (mon.0) 43 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:38:53.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:53 vm07 bash[56315]: audit 2026-03-09T14:38:52.634037+0000 mon.a (mon.0) 44 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:38:53.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:53 vm07 bash[56315]: audit 2026-03-09T14:38:52.634037+0000 mon.a (mon.0) 44 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:38:53.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:53 vm07 bash[56315]: cluster 2026-03-09T14:38:53.505625+0000 mon.a (mon.0) 45 : cluster [DBG] mgrmap e36: y(active, since 1.01471s), standbys: x 2026-03-09T14:38:53.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:53 vm07 bash[56315]: cluster 2026-03-09T14:38:53.505625+0000 mon.a (mon.0) 45 : cluster [DBG] mgrmap e36: y(active, since 1.01471s), standbys: x 2026-03-09T14:38:53.907 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:53 vm07 bash[52213]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:38:53] "GET /metrics HTTP/1.1" 200 35001 "" "Prometheus/2.51.0" 2026-03-09T14:38:53.907 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:38:53 vm07 bash[52213]: debug 2026-03-09T14:38:53.520+0000 7efe1493e640 -1 mgr.server handle_report got status from non-daemon mon.a 2026-03-09T14:38:54.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:53 vm11 bash[17885]: audit 2026-03-09T14:38:52.539624+0000 mon.b (mon.2) 117 : audit [DBG] from='mgr.? 
192.168.123.111:0/4075428117' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T14:38:54.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:53 vm11 bash[17885]: audit 2026-03-09T14:38:52.571118+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:38:54.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:53 vm11 bash[17885]: audit 2026-03-09T14:38:52.576672+0000 mon.a (mon.0) 43 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:38:54.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:53 vm11 bash[17885]: audit 2026-03-09T14:38:52.634037+0000 mon.a (mon.0) 44 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:38:54.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:53 vm11 bash[17885]: cluster 2026-03-09T14:38:53.505625+0000 mon.a (mon.0) 45 : cluster [DBG] mgrmap e36: y(active, since 1.01471s), standbys: x 2026-03-09T14:38:54.503 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:54 vm11 bash[41290]: ts=2026-03-09T14:38:54.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"192.168.123.111:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:38:54.610 INFO:teuthology.orchestra.run.vm07.stdout:true 2026-03-09T14:38:54.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:54 vm07 bash[55244]: cephadm 2026-03-09T14:38:53.564291+0000 mgr.y (mgr.44103) 2 : cephadm [INF] [09/Mar/2026:14:38:53] ENGINE Bus STARTING 2026-03-09T14:38:54.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:54 vm07 bash[55244]: cephadm 2026-03-09T14:38:53.564291+0000 mgr.y (mgr.44103) 2 : cephadm [INF] [09/Mar/2026:14:38:53] ENGINE Bus STARTING 2026-03-09T14:38:54.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:54 vm07 bash[55244]: cephadm 2026-03-09T14:38:53.672066+0000 mgr.y (mgr.44103) 3 : cephadm [INF] [09/Mar/2026:14:38:53] ENGINE Serving on https://192.168.123.107:7150 2026-03-09T14:38:54.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:54 vm07 bash[55244]: cephadm 2026-03-09T14:38:53.672066+0000 mgr.y (mgr.44103) 3 : cephadm [INF] [09/Mar/2026:14:38:53] ENGINE Serving on https://192.168.123.107:7150 2026-03-09T14:38:54.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:54 vm07 bash[55244]: cephadm 2026-03-09T14:38:53.672540+0000 mgr.y (mgr.44103) 4 : cephadm [INF] [09/Mar/2026:14:38:53] ENGINE Client ('192.168.123.107', 46528) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T14:38:54.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:54 vm07 bash[55244]: cephadm 2026-03-09T14:38:53.672540+0000 mgr.y (mgr.44103) 4 : cephadm [INF] [09/Mar/2026:14:38:53] ENGINE Client ('192.168.123.107', 46528) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T14:38:54.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:54 vm07 bash[55244]: cephadm 2026-03-09T14:38:53.773247+0000 mgr.y (mgr.44103) 5 : cephadm [INF] [09/Mar/2026:14:38:53] ENGINE Serving on http://192.168.123.107:8765 2026-03-09T14:38:54.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:54 vm07 bash[55244]: cephadm 2026-03-09T14:38:53.773247+0000 mgr.y (mgr.44103) 5 : cephadm [INF] [09/Mar/2026:14:38:53] ENGINE Serving on http://192.168.123.107:8765 2026-03-09T14:38:54.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:54 vm07 bash[55244]: cephadm 2026-03-09T14:38:53.773285+0000 mgr.y (mgr.44103) 6 : cephadm [INF] [09/Mar/2026:14:38:53] ENGINE Bus 
STARTED 2026-03-09T14:38:54.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:54 vm07 bash[55244]: cephadm 2026-03-09T14:38:53.773285+0000 mgr.y (mgr.44103) 6 : cephadm [INF] [09/Mar/2026:14:38:53] ENGINE Bus STARTED 2026-03-09T14:38:54.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:54 vm07 bash[56315]: cephadm 2026-03-09T14:38:53.564291+0000 mgr.y (mgr.44103) 2 : cephadm [INF] [09/Mar/2026:14:38:53] ENGINE Bus STARTING 2026-03-09T14:38:54.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:54 vm07 bash[56315]: cephadm 2026-03-09T14:38:53.564291+0000 mgr.y (mgr.44103) 2 : cephadm [INF] [09/Mar/2026:14:38:53] ENGINE Bus STARTING 2026-03-09T14:38:54.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:54 vm07 bash[56315]: cephadm 2026-03-09T14:38:53.672066+0000 mgr.y (mgr.44103) 3 : cephadm [INF] [09/Mar/2026:14:38:53] ENGINE Serving on https://192.168.123.107:7150 2026-03-09T14:38:54.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:54 vm07 bash[56315]: cephadm 2026-03-09T14:38:53.672066+0000 mgr.y (mgr.44103) 3 : cephadm [INF] [09/Mar/2026:14:38:53] ENGINE Serving on https://192.168.123.107:7150 2026-03-09T14:38:54.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:54 vm07 bash[56315]: cephadm 2026-03-09T14:38:53.672540+0000 mgr.y (mgr.44103) 4 : cephadm [INF] [09/Mar/2026:14:38:53] ENGINE Client ('192.168.123.107', 46528) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T14:38:54.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:54 vm07 bash[56315]: cephadm 2026-03-09T14:38:53.672540+0000 mgr.y (mgr.44103) 4 : cephadm [INF] [09/Mar/2026:14:38:53] ENGINE Client ('192.168.123.107', 46528) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T14:38:54.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:54 vm07 bash[56315]: cephadm 2026-03-09T14:38:53.773247+0000 mgr.y (mgr.44103) 5 : cephadm [INF] [09/Mar/2026:14:38:53] ENGINE Serving on http://192.168.123.107:8765 2026-03-09T14:38:54.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:54 vm07 bash[56315]: cephadm 2026-03-09T14:38:53.773247+0000 mgr.y (mgr.44103) 5 : cephadm [INF] [09/Mar/2026:14:38:53] ENGINE Serving on http://192.168.123.107:8765 2026-03-09T14:38:54.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:54 vm07 bash[56315]: cephadm 2026-03-09T14:38:53.773285+0000 mgr.y (mgr.44103) 6 : cephadm [INF] [09/Mar/2026:14:38:53] ENGINE Bus STARTED 2026-03-09T14:38:54.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:54 vm07 bash[56315]: cephadm 2026-03-09T14:38:53.773285+0000 mgr.y (mgr.44103) 6 : cephadm [INF] [09/Mar/2026:14:38:53] ENGINE Bus STARTED 2026-03-09T14:38:54.984 INFO:teuthology.orchestra.run.vm07.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T14:38:54.985 INFO:teuthology.orchestra.run.vm07.stdout:alertmanager.a vm07 *:9093,9094 running (76s) 15s ago 6m 14.3M - 0.25.0 c8568f914cd2 7b5214f8e385 2026-03-09T14:38:54.985 INFO:teuthology.orchestra.run.vm07.stdout:grafana.a vm11 *:3000 running (74s) 15s ago 6m 38.8M - dad864ee21e9 614f6a00be7a 2026-03-09T14:38:54.985 INFO:teuthology.orchestra.run.vm07.stdout:iscsi.foo.vm07.ohlmos vm07 running (37s) 15s ago 5m 41.7M - 3.5 e1d6a67b021e e3b30dab288c 2026-03-09T14:38:54.985 INFO:teuthology.orchestra.run.vm07.stdout:mgr.x vm11 *:8443,9283,8765 running (35s) 15s ago 8m 462M - 19.2.3-678-ge911bdeb 
654f31e6858e d35dddd392d1 2026-03-09T14:38:54.985 INFO:teuthology.orchestra.run.vm07.stdout:mgr.y vm07 *:8443,9283,8765 running (65s) 15s ago 9m 524M - 19.2.3-678-ge911bdeb 654f31e6858e bdbac6dff330 2026-03-09T14:38:54.985 INFO:teuthology.orchestra.run.vm07.stdout:mon.a vm07 running (9m) 15s ago 9m 56.5M 2048M 17.2.0 e1d6a67b021e 47602ca6fae7 2026-03-09T14:38:54.985 INFO:teuthology.orchestra.run.vm07.stdout:mon.b vm11 running (9m) 15s ago 9m 46.2M 2048M 17.2.0 e1d6a67b021e eac3b7829b01 2026-03-09T14:38:54.985 INFO:teuthology.orchestra.run.vm07.stdout:mon.c vm07 running (21s) 15s ago 9m 21.4M 2048M 19.2.3-678-ge911bdeb 654f31e6858e ff7dfe3a6c7c 2026-03-09T14:38:54.985 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.a vm07 *:9100 running (72s) 15s ago 6m 6863k - 1.7.0 72c9c2088986 16d64a9c3aa7 2026-03-09T14:38:54.985 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.b vm11 *:9100 running (70s) 15s ago 6m 6907k - 1.7.0 72c9c2088986 8e368c535897 2026-03-09T14:38:54.985 INFO:teuthology.orchestra.run.vm07.stdout:osd.0 vm07 running (8m) 15s ago 8m 50.7M 4096M 17.2.0 e1d6a67b021e 7a4a11fbf70d 2026-03-09T14:38:54.985 INFO:teuthology.orchestra.run.vm07.stdout:osd.1 vm07 running (8m) 15s ago 8m 52.4M 4096M 17.2.0 e1d6a67b021e 15e2e23b506b 2026-03-09T14:38:54.985 INFO:teuthology.orchestra.run.vm07.stdout:osd.2 vm07 running (8m) 15s ago 8m 48.2M 4096M 17.2.0 e1d6a67b021e fe41cd2240dc 2026-03-09T14:38:54.985 INFO:teuthology.orchestra.run.vm07.stdout:osd.3 vm07 running (7m) 15s ago 7m 50.0M 4096M 17.2.0 e1d6a67b021e b07b01a0b5aa 2026-03-09T14:38:54.985 INFO:teuthology.orchestra.run.vm07.stdout:osd.4 vm11 running (7m) 15s ago 7m 51.2M 4096M 17.2.0 e1d6a67b021e 172516d931e5 2026-03-09T14:38:54.985 INFO:teuthology.orchestra.run.vm07.stdout:osd.5 vm11 running (7m) 15s ago 7m 48.4M 4096M 17.2.0 e1d6a67b021e d7defb26b5d1 2026-03-09T14:38:54.985 INFO:teuthology.orchestra.run.vm07.stdout:osd.6 vm11 running (7m) 15s ago 7m 48.7M 4096M 17.2.0 e1d6a67b021e 52e28e90b585 2026-03-09T14:38:54.985 INFO:teuthology.orchestra.run.vm07.stdout:osd.7 vm11 running (6m) 15s ago 6m 50.3M 4096M 17.2.0 e1d6a67b021e abb74346bf4d 2026-03-09T14:38:54.985 INFO:teuthology.orchestra.run.vm07.stdout:prometheus.a vm11 *:9095 running (37s) 15s ago 6m 42.6M - 2.51.0 1d3b7f56885b e88f0339687c 2026-03-09T14:38:54.985 INFO:teuthology.orchestra.run.vm07.stdout:rgw.foo.vm07.urmgxb vm07 *:8000 running (5m) 15s ago 5m 84.8M - 17.2.0 e1d6a67b021e 765128ae03a3 2026-03-09T14:38:54.985 INFO:teuthology.orchestra.run.vm07.stdout:rgw.foo.vm11.ncyump vm11 *:8000 running (5m) 15s ago 5m 84.2M - 17.2.0 e1d6a67b021e 33917711cfd6 2026-03-09T14:38:54.985 INFO:teuthology.orchestra.run.vm07.stdout:rgw.smpl.vm07.tkkeli vm07 *:80 running (5m) 15s ago 5m 84.4M - 17.2.0 e1d6a67b021e 377fed84fff0 2026-03-09T14:38:54.985 INFO:teuthology.orchestra.run.vm07.stdout:rgw.smpl.vm11.ocxkef vm11 *:80 running (5m) 15s ago 5m 84.2M - 17.2.0 e1d6a67b021e 90ec06d07cd4 2026-03-09T14:38:55.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:54 vm11 bash[17885]: cephadm 2026-03-09T14:38:53.564291+0000 mgr.y (mgr.44103) 2 : cephadm [INF] [09/Mar/2026:14:38:53] ENGINE Bus STARTING 2026-03-09T14:38:55.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:54 vm11 bash[17885]: cephadm 2026-03-09T14:38:53.672066+0000 mgr.y (mgr.44103) 3 : cephadm [INF] [09/Mar/2026:14:38:53] ENGINE Serving on https://192.168.123.107:7150 2026-03-09T14:38:55.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:54 vm11 bash[17885]: cephadm 2026-03-09T14:38:53.672540+0000 mgr.y 
(mgr.44103) 4 : cephadm [INF] [09/Mar/2026:14:38:53] ENGINE Client ('192.168.123.107', 46528) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T14:38:55.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:54 vm11 bash[17885]: cephadm 2026-03-09T14:38:53.773247+0000 mgr.y (mgr.44103) 5 : cephadm [INF] [09/Mar/2026:14:38:53] ENGINE Serving on http://192.168.123.107:8765 2026-03-09T14:38:55.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:54 vm11 bash[17885]: cephadm 2026-03-09T14:38:53.773285+0000 mgr.y (mgr.44103) 6 : cephadm [INF] [09/Mar/2026:14:38:53] ENGINE Bus STARTED 2026-03-09T14:38:55.214 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T14:38:55.214 INFO:teuthology.orchestra.run.vm07.stdout: "mon": { 2026-03-09T14:38:55.214 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 1, 2026-03-09T14:38:55.214 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T14:38:55.214 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:38:55.214 INFO:teuthology.orchestra.run.vm07.stdout: "mgr": { 2026-03-09T14:38:55.214 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T14:38:55.214 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:38:55.214 INFO:teuthology.orchestra.run.vm07.stdout: "osd": { 2026-03-09T14:38:55.214 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8 2026-03-09T14:38:55.214 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:38:55.214 INFO:teuthology.orchestra.run.vm07.stdout: "rgw": { 2026-03-09T14:38:55.214 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 4 2026-03-09T14:38:55.214 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:38:55.214 INFO:teuthology.orchestra.run.vm07.stdout: "overall": { 2026-03-09T14:38:55.214 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 13, 2026-03-09T14:38:55.214 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 4 2026-03-09T14:38:55.215 INFO:teuthology.orchestra.run.vm07.stdout: } 2026-03-09T14:38:55.215 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T14:38:55.409 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T14:38:55.409 INFO:teuthology.orchestra.run.vm07.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", 2026-03-09T14:38:55.409 INFO:teuthology.orchestra.run.vm07.stdout: "in_progress": true, 2026-03-09T14:38:55.409 INFO:teuthology.orchestra.run.vm07.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-09T14:38:55.409 INFO:teuthology.orchestra.run.vm07.stdout: "services_complete": [ 2026-03-09T14:38:55.409 INFO:teuthology.orchestra.run.vm07.stdout: "mgr" 2026-03-09T14:38:55.409 INFO:teuthology.orchestra.run.vm07.stdout: ], 2026-03-09T14:38:55.409 INFO:teuthology.orchestra.run.vm07.stdout: "progress": "3/23 daemons upgraded", 2026-03-09T14:38:55.409 INFO:teuthology.orchestra.run.vm07.stdout: "message": "", 2026-03-09T14:38:55.409 
INFO:teuthology.orchestra.run.vm07.stdout: "is_paused": false 2026-03-09T14:38:55.409 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T14:38:55.642 INFO:teuthology.orchestra.run.vm07.stdout:HEALTH_OK 2026-03-09T14:38:55.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:55 vm07 bash[55244]: cluster 2026-03-09T14:38:54.520899+0000 mgr.y (mgr.44103) 7 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:38:55.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:55 vm07 bash[55244]: cluster 2026-03-09T14:38:54.520899+0000 mgr.y (mgr.44103) 7 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:38:55.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:55 vm07 bash[55244]: cluster 2026-03-09T14:38:54.559217+0000 mon.a (mon.0) 46 : cluster [DBG] mgrmap e37: y(active, since 2s), standbys: x 2026-03-09T14:38:55.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:55 vm07 bash[55244]: cluster 2026-03-09T14:38:54.559217+0000 mon.a (mon.0) 46 : cluster [DBG] mgrmap e37: y(active, since 2s), standbys: x 2026-03-09T14:38:55.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:55 vm07 bash[55244]: audit 2026-03-09T14:38:54.607138+0000 mgr.y (mgr.44103) 8 : audit [DBG] from='client.24926 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:38:55.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:55 vm07 bash[55244]: audit 2026-03-09T14:38:54.607138+0000 mgr.y (mgr.44103) 8 : audit [DBG] from='client.24926 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:38:55.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:55 vm07 bash[55244]: audit 2026-03-09T14:38:54.800282+0000 mgr.y (mgr.44103) 9 : audit [DBG] from='client.24932 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:38:55.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:55 vm07 bash[55244]: audit 2026-03-09T14:38:54.800282+0000 mgr.y (mgr.44103) 9 : audit [DBG] from='client.24932 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:38:55.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:55 vm07 bash[55244]: audit 2026-03-09T14:38:54.987646+0000 mgr.y (mgr.44103) 10 : audit [DBG] from='client.34126 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:38:55.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:55 vm07 bash[55244]: audit 2026-03-09T14:38:54.987646+0000 mgr.y (mgr.44103) 10 : audit [DBG] from='client.34126 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:38:55.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:55 vm07 bash[55244]: audit 2026-03-09T14:38:55.220802+0000 mon.c (mon.1) 6 : audit [DBG] from='client.? 192.168.123.107:0/3457510474' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:38:55.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:55 vm07 bash[55244]: audit 2026-03-09T14:38:55.220802+0000 mon.c (mon.1) 6 : audit [DBG] from='client.? 
192.168.123.107:0/3457510474' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:38:55.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:55 vm07 bash[55244]: audit 2026-03-09T14:38:55.415852+0000 mgr.y (mgr.44103) 11 : audit [DBG] from='client.34135 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:38:55.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:55 vm07 bash[55244]: audit 2026-03-09T14:38:55.415852+0000 mgr.y (mgr.44103) 11 : audit [DBG] from='client.34135 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:38:55.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:55 vm07 bash[56315]: cluster 2026-03-09T14:38:54.520899+0000 mgr.y (mgr.44103) 7 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:38:55.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:55 vm07 bash[56315]: cluster 2026-03-09T14:38:54.520899+0000 mgr.y (mgr.44103) 7 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:38:55.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:55 vm07 bash[56315]: cluster 2026-03-09T14:38:54.559217+0000 mon.a (mon.0) 46 : cluster [DBG] mgrmap e37: y(active, since 2s), standbys: x 2026-03-09T14:38:55.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:55 vm07 bash[56315]: cluster 2026-03-09T14:38:54.559217+0000 mon.a (mon.0) 46 : cluster [DBG] mgrmap e37: y(active, since 2s), standbys: x 2026-03-09T14:38:55.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:55 vm07 bash[56315]: audit 2026-03-09T14:38:54.607138+0000 mgr.y (mgr.44103) 8 : audit [DBG] from='client.24926 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:38:55.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:55 vm07 bash[56315]: audit 2026-03-09T14:38:54.607138+0000 mgr.y (mgr.44103) 8 : audit [DBG] from='client.24926 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:38:55.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:55 vm07 bash[56315]: audit 2026-03-09T14:38:54.800282+0000 mgr.y (mgr.44103) 9 : audit [DBG] from='client.24932 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:38:55.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:55 vm07 bash[56315]: audit 2026-03-09T14:38:54.800282+0000 mgr.y (mgr.44103) 9 : audit [DBG] from='client.24932 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:38:55.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:55 vm07 bash[56315]: audit 2026-03-09T14:38:54.987646+0000 mgr.y (mgr.44103) 10 : audit [DBG] from='client.34126 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:38:55.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:55 vm07 bash[56315]: audit 2026-03-09T14:38:54.987646+0000 mgr.y (mgr.44103) 10 : audit [DBG] from='client.34126 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:38:55.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:55 vm07 bash[56315]: audit 2026-03-09T14:38:55.220802+0000 mon.c (mon.1) 6 : audit [DBG] from='client.? 
192.168.123.107:0/3457510474' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:38:55.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:55 vm07 bash[56315]: audit 2026-03-09T14:38:55.220802+0000 mon.c (mon.1) 6 : audit [DBG] from='client.? 192.168.123.107:0/3457510474' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:38:55.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:55 vm07 bash[56315]: audit 2026-03-09T14:38:55.415852+0000 mgr.y (mgr.44103) 11 : audit [DBG] from='client.34135 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:38:55.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:55 vm07 bash[56315]: audit 2026-03-09T14:38:55.415852+0000 mgr.y (mgr.44103) 11 : audit [DBG] from='client.34135 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:38:56.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:55 vm11 bash[17885]: cluster 2026-03-09T14:38:54.520899+0000 mgr.y (mgr.44103) 7 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:38:56.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:55 vm11 bash[17885]: cluster 2026-03-09T14:38:54.559217+0000 mon.a (mon.0) 46 : cluster [DBG] mgrmap e37: y(active, since 2s), standbys: x 2026-03-09T14:38:56.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:55 vm11 bash[17885]: audit 2026-03-09T14:38:54.607138+0000 mgr.y (mgr.44103) 8 : audit [DBG] from='client.24926 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:38:56.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:55 vm11 bash[17885]: audit 2026-03-09T14:38:54.800282+0000 mgr.y (mgr.44103) 9 : audit [DBG] from='client.24932 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:38:56.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:55 vm11 bash[17885]: audit 2026-03-09T14:38:54.987646+0000 mgr.y (mgr.44103) 10 : audit [DBG] from='client.34126 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:38:56.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:55 vm11 bash[17885]: audit 2026-03-09T14:38:55.220802+0000 mon.c (mon.1) 6 : audit [DBG] from='client.? 192.168.123.107:0/3457510474' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:38:56.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:55 vm11 bash[17885]: audit 2026-03-09T14:38:55.415852+0000 mgr.y (mgr.44103) 11 : audit [DBG] from='client.34135 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:38:56.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:56 vm07 bash[55244]: audit 2026-03-09T14:38:55.649991+0000 mon.a (mon.0) 47 : audit [DBG] from='client.? 192.168.123.107:0/4105928405' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:38:56.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:56 vm07 bash[55244]: audit 2026-03-09T14:38:55.649991+0000 mon.a (mon.0) 47 : audit [DBG] from='client.? 
192.168.123.107:0/4105928405' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:38:56.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:56 vm07 bash[56315]: audit 2026-03-09T14:38:55.649991+0000 mon.a (mon.0) 47 : audit [DBG] from='client.? 192.168.123.107:0/4105928405' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:38:56.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:56 vm07 bash[56315]: audit 2026-03-09T14:38:55.649991+0000 mon.a (mon.0) 47 : audit [DBG] from='client.? 192.168.123.107:0/4105928405' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:38:56.944 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:56 vm11 bash[17885]: audit 2026-03-09T14:38:55.649991+0000 mon.a (mon.0) 47 : audit [DBG] from='client.? 192.168.123.107:0/4105928405' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:38:57.253 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:38:56 vm11 bash[41290]: ts=2026-03-09T14:38:56.947Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:38:57.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:57 vm07 bash[55244]: cluster 2026-03-09T14:38:56.521185+0000 mgr.y (mgr.44103) 12 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:38:57.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:57 vm07 bash[55244]: cluster 2026-03-09T14:38:56.521185+0000 mgr.y (mgr.44103) 12 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:38:57.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:57 vm07 bash[55244]: cluster 2026-03-09T14:38:56.590967+0000 mon.a (mon.0) 48 : cluster [DBG] mgrmap e38: y(active, since 4s), standbys: x 2026-03-09T14:38:57.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:57 vm07 bash[55244]: cluster 2026-03-09T14:38:56.590967+0000 mon.a (mon.0) 48 : cluster [DBG] mgrmap e38: y(active, since 4s), standbys: x 2026-03-09T14:38:57.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:57 vm07 bash[55244]: audit 
2026-03-09T14:38:57.454341+0000 mgr.y (mgr.44103) 13 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:38:57.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:57 vm07 bash[55244]: audit 2026-03-09T14:38:57.454341+0000 mgr.y (mgr.44103) 13 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:38:57.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:57 vm07 bash[56315]: cluster 2026-03-09T14:38:56.521185+0000 mgr.y (mgr.44103) 12 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:38:57.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:57 vm07 bash[56315]: cluster 2026-03-09T14:38:56.521185+0000 mgr.y (mgr.44103) 12 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:38:57.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:57 vm07 bash[56315]: cluster 2026-03-09T14:38:56.590967+0000 mon.a (mon.0) 48 : cluster [DBG] mgrmap e38: y(active, since 4s), standbys: x 2026-03-09T14:38:57.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:57 vm07 bash[56315]: cluster 2026-03-09T14:38:56.590967+0000 mon.a (mon.0) 48 : cluster [DBG] mgrmap e38: y(active, since 4s), standbys: x 2026-03-09T14:38:57.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:57 vm07 bash[56315]: audit 2026-03-09T14:38:57.454341+0000 mgr.y (mgr.44103) 13 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:38:57.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:57 vm07 bash[56315]: audit 2026-03-09T14:38:57.454341+0000 mgr.y (mgr.44103) 13 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:38:58.005 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:57 vm11 bash[17885]: cluster 2026-03-09T14:38:56.521185+0000 mgr.y (mgr.44103) 12 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:38:58.005 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:57 vm11 bash[17885]: cluster 2026-03-09T14:38:56.590967+0000 mon.a (mon.0) 48 : cluster [DBG] mgrmap e38: y(active, since 4s), standbys: x 2026-03-09T14:38:58.005 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:57 vm11 bash[17885]: audit 2026-03-09T14:38:57.454341+0000 mgr.y (mgr.44103) 13 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:38:59.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:59 vm11 bash[17885]: audit 2026-03-09T14:38:58.416747+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:59 vm11 bash[17885]: audit 2026-03-09T14:38:58.423289+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:59 vm11 bash[17885]: audit 2026-03-09T14:38:58.475641+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:59 
vm11 bash[17885]: audit 2026-03-09T14:38:58.482252+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:59 vm11 bash[17885]: audit 2026-03-09T14:38:59.003989+0000 mon.a (mon.0) 53 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:59 vm11 bash[17885]: audit 2026-03-09T14:38:59.009512+0000 mon.a (mon.0) 54 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:59 vm11 bash[17885]: audit 2026-03-09T14:38:59.011130+0000 mon.a (mon.0) 55 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:38:59.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:59 vm11 bash[17885]: audit 2026-03-09T14:38:59.068022+0000 mon.a (mon.0) 56 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:38:59 vm11 bash[17885]: audit 2026-03-09T14:38:59.074603+0000 mon.a (mon.0) 57 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:59 vm07 bash[55244]: audit 2026-03-09T14:38:58.416747+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:59 vm07 bash[55244]: audit 2026-03-09T14:38:58.416747+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:59 vm07 bash[55244]: audit 2026-03-09T14:38:58.423289+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:59 vm07 bash[55244]: audit 2026-03-09T14:38:58.423289+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:59 vm07 bash[55244]: audit 2026-03-09T14:38:58.475641+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:59 vm07 bash[55244]: audit 2026-03-09T14:38:58.475641+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:59 vm07 bash[55244]: audit 2026-03-09T14:38:58.482252+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:59 vm07 bash[55244]: audit 2026-03-09T14:38:58.482252+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:59 vm07 bash[55244]: audit 2026-03-09T14:38:59.003989+0000 mon.a (mon.0) 53 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:59 vm07 bash[55244]: audit 2026-03-09T14:38:59.003989+0000 mon.a (mon.0) 
53 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:59 vm07 bash[55244]: audit 2026-03-09T14:38:59.009512+0000 mon.a (mon.0) 54 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:59 vm07 bash[55244]: audit 2026-03-09T14:38:59.009512+0000 mon.a (mon.0) 54 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:59 vm07 bash[55244]: audit 2026-03-09T14:38:59.011130+0000 mon.a (mon.0) 55 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:38:59.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:59 vm07 bash[55244]: audit 2026-03-09T14:38:59.011130+0000 mon.a (mon.0) 55 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:38:59.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:59 vm07 bash[55244]: audit 2026-03-09T14:38:59.068022+0000 mon.a (mon.0) 56 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:59 vm07 bash[55244]: audit 2026-03-09T14:38:59.068022+0000 mon.a (mon.0) 56 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:59 vm07 bash[55244]: audit 2026-03-09T14:38:59.074603+0000 mon.a (mon.0) 57 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:38:59 vm07 bash[55244]: audit 2026-03-09T14:38:59.074603+0000 mon.a (mon.0) 57 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:59 vm07 bash[56315]: audit 2026-03-09T14:38:58.416747+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:59 vm07 bash[56315]: audit 2026-03-09T14:38:58.416747+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:59 vm07 bash[56315]: audit 2026-03-09T14:38:58.423289+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:59 vm07 bash[56315]: audit 2026-03-09T14:38:58.423289+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:59 vm07 bash[56315]: audit 2026-03-09T14:38:58.475641+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:59 vm07 bash[56315]: audit 2026-03-09T14:38:58.475641+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:59 vm07 bash[56315]: audit 
2026-03-09T14:38:58.482252+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:59 vm07 bash[56315]: audit 2026-03-09T14:38:58.482252+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:59 vm07 bash[56315]: audit 2026-03-09T14:38:59.003989+0000 mon.a (mon.0) 53 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:59 vm07 bash[56315]: audit 2026-03-09T14:38:59.003989+0000 mon.a (mon.0) 53 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:59 vm07 bash[56315]: audit 2026-03-09T14:38:59.009512+0000 mon.a (mon.0) 54 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:59 vm07 bash[56315]: audit 2026-03-09T14:38:59.009512+0000 mon.a (mon.0) 54 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:59 vm07 bash[56315]: audit 2026-03-09T14:38:59.011130+0000 mon.a (mon.0) 55 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:38:59.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:59 vm07 bash[56315]: audit 2026-03-09T14:38:59.011130+0000 mon.a (mon.0) 55 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:38:59.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:59 vm07 bash[56315]: audit 2026-03-09T14:38:59.068022+0000 mon.a (mon.0) 56 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:59 vm07 bash[56315]: audit 2026-03-09T14:38:59.068022+0000 mon.a (mon.0) 56 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:59 vm07 bash[56315]: audit 2026-03-09T14:38:59.074603+0000 mon.a (mon.0) 57 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:38:59.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:38:59 vm07 bash[56315]: audit 2026-03-09T14:38:59.074603+0000 mon.a (mon.0) 57 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:00.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:00 vm11 bash[17885]: cluster 2026-03-09T14:38:58.521663+0000 mgr.y (mgr.44103) 14 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:39:00.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:00 vm07 bash[55244]: cluster 2026-03-09T14:38:58.521663+0000 mgr.y (mgr.44103) 14 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:39:00.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:00 vm07 bash[55244]: cluster 2026-03-09T14:38:58.521663+0000 mgr.y (mgr.44103) 14 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB 
used, 160 GiB / 160 GiB avail 2026-03-09T14:39:00.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:00 vm07 bash[56315]: cluster 2026-03-09T14:38:58.521663+0000 mgr.y (mgr.44103) 14 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:39:00.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:00 vm07 bash[56315]: cluster 2026-03-09T14:38:58.521663+0000 mgr.y (mgr.44103) 14 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:39:02.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:02 vm11 bash[17885]: cluster 2026-03-09T14:39:00.522154+0000 mgr.y (mgr.44103) 15 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T14:39:02.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:02 vm07 bash[55244]: cluster 2026-03-09T14:39:00.522154+0000 mgr.y (mgr.44103) 15 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T14:39:02.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:02 vm07 bash[55244]: cluster 2026-03-09T14:39:00.522154+0000 mgr.y (mgr.44103) 15 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T14:39:02.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:02 vm07 bash[56315]: cluster 2026-03-09T14:39:00.522154+0000 mgr.y (mgr.44103) 15 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T14:39:02.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:02 vm07 bash[56315]: cluster 2026-03-09T14:39:00.522154+0000 mgr.y (mgr.44103) 15 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T14:39:03.871 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:39:03 vm07 bash[52213]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:39:03] "GET /metrics HTTP/1.1" 200 35001 "" "Prometheus/2.51.0" 2026-03-09T14:39:04.429 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:39:04 vm11 bash[41290]: ts=2026-03-09T14:39:04.146Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"192.168.123.111:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:39:04.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:04 vm11 bash[17885]: cluster 2026-03-09T14:39:02.522503+0000 mgr.y (mgr.44103) 16 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T14:39:04.870 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:04 vm07 bash[55244]: cluster 2026-03-09T14:39:02.522503+0000 mgr.y (mgr.44103) 16 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T14:39:04.870 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:04 vm07 bash[55244]: cluster 2026-03-09T14:39:02.522503+0000 mgr.y (mgr.44103) 16 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T14:39:04.870 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:04 vm07 bash[56315]: cluster 2026-03-09T14:39:02.522503+0000 mgr.y (mgr.44103) 16 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T14:39:04.871 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:04 vm07 bash[56315]: cluster 2026-03-09T14:39:02.522503+0000 mgr.y (mgr.44103) 16 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T14:39:06.436 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:06 vm11 bash[17885]: cluster 2026-03-09T14:39:04.522994+0000 mgr.y (mgr.44103) 17 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T14:39:06.436 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:06 vm11 bash[17885]: audit 2026-03-09T14:39:05.582662+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.436 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:06 vm11 bash[17885]: audit 2026-03-09T14:39:05.587329+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.436 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:06 vm11 bash[17885]: audit 2026-03-09T14:39:05.588309+0000 mon.a (mon.0) 60 : audit [INF] from='mgr.44103 
192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:39:06.436 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:06 vm11 bash[17885]: audit 2026-03-09T14:39:05.588910+0000 mon.a (mon.0) 61 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:06.436 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:06 vm11 bash[17885]: audit 2026-03-09T14:39:05.589313+0000 mon.a (mon.0) 62 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:39:06.436 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:06 vm11 bash[17885]: audit 2026-03-09T14:39:05.730757+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.436 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:06 vm11 bash[17885]: audit 2026-03-09T14:39:05.735947+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.436 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:06 vm11 bash[17885]: audit 2026-03-09T14:39:05.739742+0000 mon.a (mon.0) 65 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.436 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:06 vm11 bash[17885]: audit 2026-03-09T14:39:05.743928+0000 mon.a (mon.0) 66 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.436 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:06 vm11 bash[17885]: audit 2026-03-09T14:39:05.748698+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.436 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:06 vm11 bash[17885]: audit 2026-03-09T14:39:05.787648+0000 mon.a (mon.0) 68 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:39:06.436 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:06 vm11 bash[17885]: audit 2026-03-09T14:39:05.788925+0000 mon.a (mon.0) 69 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:06.436 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:06 vm11 bash[17885]: audit 2026-03-09T14:39:05.789724+0000 mon.a (mon.0) 70 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-09T14:39:06.436 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:06 vm11 bash[17885]: audit 2026-03-09T14:39:05.790211+0000 mon.a (mon.0) 71 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["b"]}]: dispatch 2026-03-09T14:39:06.436 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:06 vm11 bash[17885]: audit 2026-03-09T14:39:06.195849+0000 mon.a (mon.0) 72 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.436 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:06 vm11 bash[17885]: audit 2026-03-09T14:39:06.199811+0000 mon.a (mon.0) 73 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:39:06.436 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:06 vm11 bash[17885]: 
audit 2026-03-09T14:39:06.200913+0000 mon.a (mon.0) 74 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:39:06.436 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:06 vm11 bash[17885]: audit 2026-03-09T14:39:06.201292+0000 mon.a (mon.0) 75 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:06.731 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:06 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:06.731 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:06 vm11 systemd[1]: Stopping Ceph mon.b for f59f9828-1bc3-11f1-bfd8-7b3d0c866040... 2026-03-09T14:39:06.731 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:39:06 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:06.731 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:39:06 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:06.731 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:39:06 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:06.731 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:39:06 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:06.731 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:39:06 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:39:06.731 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:39:06 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:06.732 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:39:06 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:06.732 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:39:06 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:06.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:06 vm07 bash[55244]: cluster 2026-03-09T14:39:04.522994+0000 mgr.y (mgr.44103) 17 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T14:39:06.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:06 vm07 bash[55244]: cluster 2026-03-09T14:39:04.522994+0000 mgr.y (mgr.44103) 17 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T14:39:06.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:06 vm07 bash[55244]: audit 2026-03-09T14:39:05.582662+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:06 vm07 bash[55244]: audit 2026-03-09T14:39:05.582662+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:06 vm07 bash[55244]: audit 2026-03-09T14:39:05.587329+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:06 vm07 bash[55244]: audit 2026-03-09T14:39:05.587329+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:06 vm07 bash[55244]: audit 2026-03-09T14:39:05.588309+0000 mon.a (mon.0) 60 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:39:06.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:06 vm07 bash[55244]: audit 2026-03-09T14:39:05.588309+0000 mon.a (mon.0) 60 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:39:06.906 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:06 vm07 bash[55244]: audit 2026-03-09T14:39:05.588910+0000 mon.a (mon.0) 61 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:06.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:06 vm07 bash[55244]: audit 2026-03-09T14:39:05.588910+0000 mon.a (mon.0) 61 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:06.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:06 vm07 bash[55244]: audit 2026-03-09T14:39:05.589313+0000 mon.a (mon.0) 62 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:39:06.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:06 vm07 bash[55244]: audit 2026-03-09T14:39:05.589313+0000 mon.a (mon.0) 62 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:39:06.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:06 vm07 bash[55244]: audit 2026-03-09T14:39:05.730757+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:06 vm07 bash[55244]: audit 2026-03-09T14:39:05.730757+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:06 vm07 bash[55244]: audit 2026-03-09T14:39:05.735947+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:06 vm07 bash[55244]: audit 2026-03-09T14:39:05.735947+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:06 vm07 bash[55244]: audit 2026-03-09T14:39:05.739742+0000 mon.a (mon.0) 65 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:06 vm07 bash[55244]: audit 2026-03-09T14:39:05.739742+0000 mon.a (mon.0) 65 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:06 vm07 bash[55244]: audit 2026-03-09T14:39:05.743928+0000 mon.a (mon.0) 66 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:06 vm07 bash[55244]: audit 2026-03-09T14:39:05.743928+0000 mon.a (mon.0) 66 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:06 vm07 bash[55244]: audit 2026-03-09T14:39:05.748698+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:06 vm07 bash[55244]: audit 2026-03-09T14:39:05.748698+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:06 vm07 bash[55244]: audit 2026-03-09T14:39:05.787648+0000 mon.a (mon.0) 68 : audit [DBG] 
from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:39:06.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:06 vm07 bash[55244]: audit 2026-03-09T14:39:05.787648+0000 mon.a (mon.0) 68 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:39:06.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:06 vm07 bash[55244]: audit 2026-03-09T14:39:05.788925+0000 mon.a (mon.0) 69 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:06.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:06 vm07 bash[55244]: audit 2026-03-09T14:39:05.788925+0000 mon.a (mon.0) 69 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:06.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:06 vm07 bash[55244]: audit 2026-03-09T14:39:05.789724+0000 mon.a (mon.0) 70 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-09T14:39:06.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:06 vm07 bash[55244]: audit 2026-03-09T14:39:05.789724+0000 mon.a (mon.0) 70 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-09T14:39:06.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:06 vm07 bash[55244]: audit 2026-03-09T14:39:05.790211+0000 mon.a (mon.0) 71 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["b"]}]: dispatch 2026-03-09T14:39:06.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:06 vm07 bash[55244]: audit 2026-03-09T14:39:05.790211+0000 mon.a (mon.0) 71 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["b"]}]: dispatch 2026-03-09T14:39:06.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:06 vm07 bash[55244]: audit 2026-03-09T14:39:06.195849+0000 mon.a (mon.0) 72 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:06 vm07 bash[55244]: audit 2026-03-09T14:39:06.195849+0000 mon.a (mon.0) 72 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:06 vm07 bash[55244]: audit 2026-03-09T14:39:06.199811+0000 mon.a (mon.0) 73 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:06 vm07 bash[55244]: audit 2026-03-09T14:39:06.199811+0000 mon.a (mon.0) 73 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:06 vm07 bash[55244]: audit 2026-03-09T14:39:06.200913+0000 mon.a (mon.0) 74 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:06 vm07 bash[55244]: audit 2026-03-09T14:39:06.200913+0000 mon.a (mon.0) 74 : audit [DBG] from='mgr.44103 
192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:06 vm07 bash[55244]: audit 2026-03-09T14:39:06.201292+0000 mon.a (mon.0) 75 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:06 vm07 bash[55244]: audit 2026-03-09T14:39:06.201292+0000 mon.a (mon.0) 75 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:06 vm07 bash[56315]: cluster 2026-03-09T14:39:04.522994+0000 mgr.y (mgr.44103) 17 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:06 vm07 bash[56315]: cluster 2026-03-09T14:39:04.522994+0000 mgr.y (mgr.44103) 17 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:06 vm07 bash[56315]: audit 2026-03-09T14:39:05.582662+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:06 vm07 bash[56315]: audit 2026-03-09T14:39:05.582662+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:06 vm07 bash[56315]: audit 2026-03-09T14:39:05.587329+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:06 vm07 bash[56315]: audit 2026-03-09T14:39:05.587329+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:06 vm07 bash[56315]: audit 2026-03-09T14:39:05.588309+0000 mon.a (mon.0) 60 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:06 vm07 bash[56315]: audit 2026-03-09T14:39:05.588309+0000 mon.a (mon.0) 60 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:06 vm07 bash[56315]: audit 2026-03-09T14:39:05.588910+0000 mon.a (mon.0) 61 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:06 vm07 bash[56315]: audit 2026-03-09T14:39:05.588910+0000 mon.a (mon.0) 61 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:06 vm07 bash[56315]: audit 2026-03-09T14:39:05.589313+0000 mon.a (mon.0) 62 : audit 
[INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:06 vm07 bash[56315]: audit 2026-03-09T14:39:05.589313+0000 mon.a (mon.0) 62 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:06 vm07 bash[56315]: audit 2026-03-09T14:39:05.730757+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:06 vm07 bash[56315]: audit 2026-03-09T14:39:05.730757+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:06 vm07 bash[56315]: audit 2026-03-09T14:39:05.735947+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:06 vm07 bash[56315]: audit 2026-03-09T14:39:05.735947+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:06 vm07 bash[56315]: audit 2026-03-09T14:39:05.739742+0000 mon.a (mon.0) 65 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:06 vm07 bash[56315]: audit 2026-03-09T14:39:05.739742+0000 mon.a (mon.0) 65 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:06 vm07 bash[56315]: audit 2026-03-09T14:39:05.743928+0000 mon.a (mon.0) 66 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:06 vm07 bash[56315]: audit 2026-03-09T14:39:05.743928+0000 mon.a (mon.0) 66 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:06 vm07 bash[56315]: audit 2026-03-09T14:39:05.748698+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:06 vm07 bash[56315]: audit 2026-03-09T14:39:05.748698+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:06 vm07 bash[56315]: audit 2026-03-09T14:39:05.787648+0000 mon.a (mon.0) 68 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:06 vm07 bash[56315]: audit 2026-03-09T14:39:05.787648+0000 mon.a (mon.0) 68 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:06 vm07 bash[56315]: audit 2026-03-09T14:39:05.788925+0000 mon.a (mon.0) 69 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:06.907 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:06 vm07 bash[56315]: audit 2026-03-09T14:39:05.788925+0000 mon.a (mon.0) 69 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:06 vm07 bash[56315]: audit 2026-03-09T14:39:05.789724+0000 mon.a (mon.0) 70 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:06 vm07 bash[56315]: audit 2026-03-09T14:39:05.789724+0000 mon.a (mon.0) 70 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:06 vm07 bash[56315]: audit 2026-03-09T14:39:05.790211+0000 mon.a (mon.0) 71 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["b"]}]: dispatch 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:06 vm07 bash[56315]: audit 2026-03-09T14:39:05.790211+0000 mon.a (mon.0) 71 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["b"]}]: dispatch 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:06 vm07 bash[56315]: audit 2026-03-09T14:39:06.195849+0000 mon.a (mon.0) 72 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:06 vm07 bash[56315]: audit 2026-03-09T14:39:06.195849+0000 mon.a (mon.0) 72 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:06 vm07 bash[56315]: audit 2026-03-09T14:39:06.199811+0000 mon.a (mon.0) 73 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:06 vm07 bash[56315]: audit 2026-03-09T14:39:06.199811+0000 mon.a (mon.0) 73 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:06 vm07 bash[56315]: audit 2026-03-09T14:39:06.200913+0000 mon.a (mon.0) 74 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:06 vm07 bash[56315]: audit 2026-03-09T14:39:06.200913+0000 mon.a (mon.0) 74 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:06 vm07 bash[56315]: audit 2026-03-09T14:39:06.201292+0000 mon.a (mon.0) 75 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:06.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:06 vm07 bash[56315]: audit 2026-03-09T14:39:06.201292+0000 mon.a (mon.0) 75 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
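[editor's note] Just before mon.b is terminated (next entries), the audit trail records mgr.y dispatching "quorum_status" and "mon ok-to-stop" for ids ["b"]: the orchestrator's safety gate that refuses to restart a monitor whose loss would break quorum. A minimal sketch (assuming an admin keyring is available on one of the hosts; "b" is the mon id seen in the audit entries above) of running the same checks by hand:

    # Hypothetical manual run of the same safety gate, using stock Ceph commands
    cephadm shell -- ceph quorum_status -f json | jq '.quorum_names'
    cephadm shell -- ceph mon ok-to-stop b && echo "safe to restart mon.b"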
2026-03-09T14:39:07.003 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:39:06 vm11 bash[41290]: ts=2026-03-09T14:39:06.947Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:39:07.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:06 vm11 bash[17885]: debug 2026-03-09T14:39:06.763+0000 7f465174d700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.b -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-09T14:39:07.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:06 vm11 bash[17885]: debug 2026-03-09T14:39:06.763+0000 7f465174d700 -1 mon.b@2(peon) e3 *** Got Signal Terminated *** 2026-03-09T14:39:07.441 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:39:07 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:07.442 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:39:07 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:07.442 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:39:07 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
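[editor's note] The CephNodeDiskspaceWarning failure above is a PromQL join problem, not a disk problem: the rule joins node_filesystem_free_bytes with node_uname_info "on (instance) group_left (nodename)", but two node_uname_info series share instance="vm07" (one carries a cluster label, one does not), so the right-hand side is no longer unique and evaluation aborts with "many-to-many matching not allowed". A minimal sketch (assumptions: the cephadm-deployed Prometheus on vm11 listens on port 9095, and curl/jq are available) for listing the colliding label sets:

    # Hypothetical query (port 9095 is an assumption): dump every label set behind
    # node_uname_info for instance "vm07" to see the duplicate series
    curl -sG 'http://vm11:9095/api/v1/query' \
         --data-urlencode 'query=node_uname_info{instance="vm07"}' |
      jq '.data.result[].metric'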
2026-03-09T14:39:07.442 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:39:07 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:07.442 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:39:07 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:07.442 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:39:07 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:07.442 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43466]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-mon-b 2026-03-09T14:39:07.442 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 systemd[1]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@mon.b.service: Deactivated successfully. 2026-03-09T14:39:07.442 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 systemd[1]: Stopped Ceph mon.b for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:39:07.442 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:07.442 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:39:07 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:07.442 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:39:07 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:07.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 systemd[1]: Started Ceph mon.b for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 
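[editor's note] At this point mon.b's old container has been stopped ("Deactivated successfully") and the unit started again; the entries that follow show the daemon coming back up, i.e. the first daemon restarted by the in-progress upgrade. A minimal sketch (assuming root shell access on vm11; field names as emitted by recent cephadm releases) of confirming what the restarted daemon is now running:

    # Hypothetical check on vm11: report the container image and Ceph version of mon.b
    sudo cephadm ls | jq '.[] | select(.name == "mon.b") | {name, container_image_name, version}'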
2026-03-09T14:39:07.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.535+0000 7f4155213d80 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-09T14:39:07.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.535+0000 7f4155213d80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 7 2026-03-09T14:39:07.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.535+0000 7f4155213d80 0 pidfile_write: ignore empty --pid-file 2026-03-09T14:39:07.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.535+0000 7f4155213d80 0 load: jerasure load: lrc 2026-03-09T14:39:07.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: RocksDB version: 7.9.2 2026-03-09T14:39:07.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Git sha 0 2026-03-09T14:39:07.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Compile date 2026-02-25 18:11:04 2026-03-09T14:39:07.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: DB SUMMARY 2026-03-09T14:39:07.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: DB Session ID: AGJQZ98WEXPHQNZ4L4BD 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: CURRENT file: CURRENT 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: IDENTITY file: IDENTITY 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: MANIFEST file: MANIFEST-000009 size: 876 Bytes 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-b/store.db dir, Total Num: 1, files: 000024.sst 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-b/store.db: 000022.log size: 399661 ; 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.error_if_exists: 0 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.create_if_missing: 0 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.paranoid_checks: 1 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 
14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.env: 0x55929f998dc0 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.info_log: 0x5592a57c77e0 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.statistics: (nil) 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.use_fsync: 0 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.max_log_file_size: 0 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.allow_fallocate: 1 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.use_direct_reads: 0 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-09T14:39:07.756 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.db_log_dir: 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.wal_dir: 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.write_buffer_manager: 0x5592a57cb900 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 
rocksdb: Options.enable_pipelined_write: 0 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.unordered_write: 0 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.row_cache: None 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.wal_filter: None 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.two_write_queues: 0 2026-03-09T14:39:07.756 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.wal_compression: 0 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.atomic_flush: 0 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.log_readahead_size: 0 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: 
debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.max_background_jobs: 2 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.max_background_compactions: -1 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.max_subcompactions: 1 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.max_open_files: -1 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: 
Options.bytes_per_sync: 0 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.max_background_flushes: -1 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Compression algorithms supported: 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: kZSTD supported: 0 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: kXpressCompression supported: 0 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: kBZip2Compression supported: 0 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: kLZ4Compression supported: 1 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: kZlibCompression supported: 1 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: kSnappyCompression supported: 1 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-b/store.db/MANIFEST-000009 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 
2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.merge_operator: 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.compaction_filter: None 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-09T14:39:07.757 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5592a57c63c0) 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: cache_index_and_filter_blocks: 1 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: pin_top_level_index_and_filter: 1 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: index_type: 0 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: data_block_index_type: 0 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: index_shortening: 1 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: data_block_hash_table_util_ratio: 0.750000 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: checksum: 4 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: no_block_cache: 0 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: block_cache: 0x5592a57ed350 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: block_cache_name: BinnedLRUCache 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: block_cache_options: 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: capacity : 536870912 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: num_shard_bits : 4 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: strict_capacity_limit : 0 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: high_pri_pool_ratio: 0.000 2026-03-09T14:39:07.758 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: block_cache_compressed: (nil) 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: persistent_cache: (nil) 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: block_size: 4096 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: block_size_deviation: 10 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: block_restart_interval: 16 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: index_block_restart_interval: 1 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: metadata_block_size: 4096 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: partition_filters: 0 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: use_delta_encoding: 1 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: filter_policy: bloomfilter 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: whole_key_filtering: 1 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: verify_compression: 0 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: read_amp_bytes_per_bit: 0 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: format_version: 5 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: enable_index_compression: 1 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: block_align: 0 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: max_auto_readahead_size: 262144 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: prepopulate_block_cache: 0 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: initial_auto_readahead_size: 8192 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: num_file_reads_for_auto_readahead: 2 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.compression: NoCompression 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: 
Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.num_levels: 7 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-09T14:39:07.758 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T14:39:07.759 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 
7f4155213d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 
rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.inplace_update_support: 0 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.bloom_locality: 0 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.max_successive_merges: 0 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.ttl: 2592000 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-09T14:39:07.759 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.enable_blob_files: false 2026-03-09T14:39:07.759 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.min_blob_size: 0 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.539+0000 7f4155213d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.543+0000 7f414f7e6640 3 rocksdb: [table/block_based/block_based_table_reader.cc:721] At least one SST file opened without unique ID to verify: 24.sst 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.543+0000 7f4155213d80 4 rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.543+0000 7f4155213d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-b/store.db/MANIFEST-000009 succeeded,manifest_file_number is 9, next_file_number is 26, last_sequence is 12744, log_number is 22,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.543+0000 7f4155213d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 22 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.543+0000 7f4155213d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 53381caa-f90a-4a54-bd06-d6eb933c9e82 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.543+0000 7f4155213d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773067147546952, "job": 1, "event": "recovery_started", "wal_files": [22]} 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.543+0000 7f4155213d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #22 mode 2 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.547+0000 7f4155213d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773067147549608, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 27, "file_size": 250269, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12749, "largest_seqno": 12861, "table_properties": {"data_size": 248456, "index_size": 588, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 197, "raw_key_size": 1552, "raw_average_key_size": 25, "raw_value_size": 246789, "raw_average_value_size": 4113, "num_data_blocks": 24, "num_entries": 60, "num_filter_entries": 60, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773067147, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "53381caa-f90a-4a54-bd06-d6eb933c9e82", "db_session_id": "AGJQZ98WEXPHQNZ4L4BD", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}} 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.547+0000 7f4155213d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773067147549818, "job": 1, "event": "recovery_finished"} 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.547+0000 7f4155213d80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 29 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 
vm11 bash[43577]: debug 2026-03-09T14:39:07.547+0000 7f4155213d80 4 rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.551+0000 7f4155213d80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-b/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.551+0000 7f4155213d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5592a57eee00 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.551+0000 7f4155213d80 4 rocksdb: DB pointer 0x5592a58fa000 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.551+0000 7f4155213d80 0 starting mon.b rank 2 at public addrs [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] at bind addrs [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] mon_data /var/lib/ceph/mon/ceph-b fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.551+0000 7f4155213d80 1 mon.b@-1(???) e3 preinit fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.551+0000 7f4155213d80 0 mon.b@-1(???).mds e1 new map 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.551+0000 7f4155213d80 0 mon.b@-1(???).mds e1 print_map 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: e1 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: btime 1970-01-01T00:00:00:000000+0000 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: legacy client fscid: -1 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: No filesystems configured 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.551+0000 7f4155213d80 0 mon.b@-1(???).osd e92 crush map has features 3314933000854323200, adjusting msgr requires 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.551+0000 7f4155213d80 0 mon.b@-1(???).osd e92 crush map has features 432629239337189376, adjusting msgr requires 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 
2026-03-09T14:39:07.551+0000 7f4155213d80 0 mon.b@-1(???).osd e92 crush map has features 432629239337189376, adjusting msgr requires 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.551+0000 7f4155213d80 0 mon.b@-1(???).osd e92 crush map has features 432629239337189376, adjusting msgr requires 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.551+0000 7f4155213d80 1 mon.b@-1(???).paxosservice(auth 1..23) refresh upgraded, format 0 -> 3 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.555+0000 7f414afdd640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: debug 2026-03-09T14:39:07.555+0000 7f414afdd640 4 rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: ** DB Stats ** 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: ** Compaction Stats [default] ** 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: L0 1/0 244.40 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 112.8 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: L6 1/0 10.97 MB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: Sum 2/0 11.21 MB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 112.8 0.00 0.00 1 0.002 0 0 0.0 0.0 
2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 112.8 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: ** Compaction Stats [default] ** 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 112.8 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: Flush(GB): cumulative 0.000, interval 0.000 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: AddFile(Total Files): cumulative 0, interval 0 2026-03-09T14:39:07.760 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: AddFile(L0 Files): cumulative 0, interval 0 2026-03-09T14:39:07.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: AddFile(Keys): cumulative 0, interval 0 2026-03-09T14:39:07.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: Cumulative compaction: 0.00 GB write, 15.27 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T14:39:07.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: Interval compaction: 0.00 GB write, 15.27 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T14:39:07.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-09T14:39:07.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: Block cache BinnedLRUCache@0x5592a57ed350#7 capacity: 512.00 MB usage: 429.33 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 7e-06 secs_since: 0 2026-03-09T14:39:07.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: Block cache entry stats(count,size,portion): DataBlock(32,399.28 KB,0.0761569%) FilterBlock(2,9.27 KB,0.00176728%) IndexBlock(2,20.78 KB,0.00396371%) Misc(1,0.00 KB,0%) 2026-03-09T14:39:07.761 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:07 vm11 bash[43577]: ** File Read Latency Histogram By Level [default] ** 
2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: audit 2026-03-09T14:39:07.462747+0000 mgr.y (mgr.44103) 30 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: audit 2026-03-09T14:39:07.462747+0000 mgr.y (mgr.44103) 30 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: cluster 2026-03-09T14:39:07.980955+0000 mon.b (mon.2) 2 : cluster [INF] mon.b calling monitor election 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: cluster 2026-03-09T14:39:07.980955+0000 mon.b (mon.2) 2 : cluster [INF] mon.b calling monitor election 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: audit 2026-03-09T14:39:07.984091+0000 mon.a (mon.0) 92 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: audit 2026-03-09T14:39:07.984091+0000 mon.a (mon.0) 92 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: audit 2026-03-09T14:39:07.984240+0000 mon.a (mon.0) 93 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: audit 2026-03-09T14:39:07.984240+0000 mon.a (mon.0) 93 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: audit 2026-03-09T14:39:07.984271+0000 mon.a (mon.0) 94 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: audit 2026-03-09T14:39:07.984271+0000 mon.a (mon.0) 94 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: cluster 2026-03-09T14:39:07.984621+0000 mon.a (mon.0) 95 : cluster [INF] mon.a calling monitor election 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: cluster 2026-03-09T14:39:07.984621+0000 mon.a (mon.0) 95 : cluster [INF] mon.a calling monitor election 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: cluster 2026-03-09T14:39:07.984773+0000 mon.c (mon.1) 7 : cluster [INF] mon.c calling monitor election 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: cluster 2026-03-09T14:39:07.984773+0000 mon.c (mon.1) 7 : cluster [INF] mon.c calling monitor election 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 
bash[43577]: cluster 2026-03-09T14:39:07.986799+0000 mon.a (mon.0) 96 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: cluster 2026-03-09T14:39:07.986799+0000 mon.a (mon.0) 96 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: cluster 2026-03-09T14:39:07.991101+0000 mon.a (mon.0) 97 : cluster [DBG] monmap epoch 4 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: cluster 2026-03-09T14:39:07.991101+0000 mon.a (mon.0) 97 : cluster [DBG] monmap epoch 4 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: cluster 2026-03-09T14:39:07.991144+0000 mon.a (mon.0) 98 : cluster [DBG] fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: cluster 2026-03-09T14:39:07.991144+0000 mon.a (mon.0) 98 : cluster [DBG] fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: cluster 2026-03-09T14:39:07.991183+0000 mon.a (mon.0) 99 : cluster [DBG] last_changed 2026-03-09T14:39:07.972618+0000 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: cluster 2026-03-09T14:39:07.991183+0000 mon.a (mon.0) 99 : cluster [DBG] last_changed 2026-03-09T14:39:07.972618+0000 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: cluster 2026-03-09T14:39:07.991222+0000 mon.a (mon.0) 100 : cluster [DBG] created 2026-03-09T14:29:18.743288+0000 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: cluster 2026-03-09T14:39:07.991222+0000 mon.a (mon.0) 100 : cluster [DBG] created 2026-03-09T14:29:18.743288+0000 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: cluster 2026-03-09T14:39:07.991261+0000 mon.a (mon.0) 101 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: cluster 2026-03-09T14:39:07.991261+0000 mon.a (mon.0) 101 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: cluster 2026-03-09T14:39:07.991302+0000 mon.a (mon.0) 102 : cluster [DBG] election_strategy: 1 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: cluster 2026-03-09T14:39:07.991302+0000 mon.a (mon.0) 102 : cluster [DBG] election_strategy: 1 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: cluster 2026-03-09T14:39:07.991341+0000 mon.a (mon.0) 103 : cluster [DBG] 0: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.a 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: cluster 2026-03-09T14:39:07.991341+0000 mon.a (mon.0) 103 : cluster [DBG] 0: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.a 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: cluster 2026-03-09T14:39:07.991389+0000 mon.a (mon.0) 104 : cluster [DBG] 1: [v2:192.168.123.107:3301/0,v1:192.168.123.107:6790/0] mon.c 2026-03-09T14:39:09.254 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: cluster 2026-03-09T14:39:07.991389+0000 mon.a (mon.0) 104 : cluster [DBG] 1: [v2:192.168.123.107:3301/0,v1:192.168.123.107:6790/0] mon.c 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: cluster 2026-03-09T14:39:07.991427+0000 mon.a (mon.0) 105 : cluster [DBG] 2: [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] mon.b 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: cluster 2026-03-09T14:39:07.991427+0000 mon.a (mon.0) 105 : cluster [DBG] 2: [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] mon.b 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: cluster 2026-03-09T14:39:07.991771+0000 mon.a (mon.0) 106 : cluster [DBG] fsmap 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: cluster 2026-03-09T14:39:07.991771+0000 mon.a (mon.0) 106 : cluster [DBG] fsmap 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: cluster 2026-03-09T14:39:07.991847+0000 mon.a (mon.0) 107 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: cluster 2026-03-09T14:39:07.991847+0000 mon.a (mon.0) 107 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: cluster 2026-03-09T14:39:07.992273+0000 mon.a (mon.0) 108 : cluster [DBG] mgrmap e38: y(active, since 15s), standbys: x 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: cluster 2026-03-09T14:39:07.992273+0000 mon.a (mon.0) 108 : cluster [DBG] mgrmap e38: y(active, since 15s), standbys: x 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: cluster 2026-03-09T14:39:07.992544+0000 mon.a (mon.0) 109 : cluster [INF] overall HEALTH_OK 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: cluster 2026-03-09T14:39:07.992544+0000 mon.a (mon.0) 109 : cluster [INF] overall HEALTH_OK 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: audit 2026-03-09T14:39:07.998212+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: audit 2026-03-09T14:39:07.998212+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: audit 2026-03-09T14:39:08.003306+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:09.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:08 vm11 bash[43577]: audit 2026-03-09T14:39:08.003306+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:09.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: audit 2026-03-09T14:39:07.462747+0000 mgr.y (mgr.44103) 30 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:39:09.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: audit 
2026-03-09T14:39:07.462747+0000 mgr.y (mgr.44103) 30 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:39:09.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: cluster 2026-03-09T14:39:07.980955+0000 mon.b (mon.2) 2 : cluster [INF] mon.b calling monitor election 2026-03-09T14:39:09.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: cluster 2026-03-09T14:39:07.980955+0000 mon.b (mon.2) 2 : cluster [INF] mon.b calling monitor election 2026-03-09T14:39:09.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: audit 2026-03-09T14:39:07.984091+0000 mon.a (mon.0) 92 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:39:09.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: audit 2026-03-09T14:39:07.984091+0000 mon.a (mon.0) 92 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:39:09.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: audit 2026-03-09T14:39:07.984240+0000 mon.a (mon.0) 93 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:39:09.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: audit 2026-03-09T14:39:07.984240+0000 mon.a (mon.0) 93 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:39:09.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: audit 2026-03-09T14:39:07.984271+0000 mon.a (mon.0) 94 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:39:09.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: audit 2026-03-09T14:39:07.984271+0000 mon.a (mon.0) 94 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:39:09.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: cluster 2026-03-09T14:39:07.984621+0000 mon.a (mon.0) 95 : cluster [INF] mon.a calling monitor election 2026-03-09T14:39:09.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: cluster 2026-03-09T14:39:07.984621+0000 mon.a (mon.0) 95 : cluster [INF] mon.a calling monitor election 2026-03-09T14:39:09.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: cluster 2026-03-09T14:39:07.984773+0000 mon.c (mon.1) 7 : cluster [INF] mon.c calling monitor election 2026-03-09T14:39:09.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: cluster 2026-03-09T14:39:07.984773+0000 mon.c (mon.1) 7 : cluster [INF] mon.c calling monitor election 2026-03-09T14:39:09.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: cluster 2026-03-09T14:39:07.986799+0000 mon.a (mon.0) 96 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T14:39:09.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: cluster 2026-03-09T14:39:07.986799+0000 mon.a (mon.0) 96 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T14:39:09.407 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: cluster 2026-03-09T14:39:07.991101+0000 mon.a (mon.0) 97 : cluster [DBG] monmap epoch 4 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: cluster 2026-03-09T14:39:07.991101+0000 mon.a (mon.0) 97 : cluster [DBG] monmap epoch 4 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: cluster 2026-03-09T14:39:07.991144+0000 mon.a (mon.0) 98 : cluster [DBG] fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: cluster 2026-03-09T14:39:07.991144+0000 mon.a (mon.0) 98 : cluster [DBG] fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: cluster 2026-03-09T14:39:07.991183+0000 mon.a (mon.0) 99 : cluster [DBG] last_changed 2026-03-09T14:39:07.972618+0000 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: cluster 2026-03-09T14:39:07.991183+0000 mon.a (mon.0) 99 : cluster [DBG] last_changed 2026-03-09T14:39:07.972618+0000 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: cluster 2026-03-09T14:39:07.991222+0000 mon.a (mon.0) 100 : cluster [DBG] created 2026-03-09T14:29:18.743288+0000 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: cluster 2026-03-09T14:39:07.991222+0000 mon.a (mon.0) 100 : cluster [DBG] created 2026-03-09T14:29:18.743288+0000 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: audit 2026-03-09T14:39:07.462747+0000 mgr.y (mgr.44103) 30 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: audit 2026-03-09T14:39:07.462747+0000 mgr.y (mgr.44103) 30 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: cluster 2026-03-09T14:39:07.980955+0000 mon.b (mon.2) 2 : cluster [INF] mon.b calling monitor election 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: cluster 2026-03-09T14:39:07.980955+0000 mon.b (mon.2) 2 : cluster [INF] mon.b calling monitor election 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: audit 2026-03-09T14:39:07.984091+0000 mon.a (mon.0) 92 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: audit 2026-03-09T14:39:07.984091+0000 mon.a (mon.0) 92 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: audit 2026-03-09T14:39:07.984240+0000 mon.a (mon.0) 93 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: audit 
2026-03-09T14:39:07.984240+0000 mon.a (mon.0) 93 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: audit 2026-03-09T14:39:07.984271+0000 mon.a (mon.0) 94 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: audit 2026-03-09T14:39:07.984271+0000 mon.a (mon.0) 94 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: cluster 2026-03-09T14:39:07.984621+0000 mon.a (mon.0) 95 : cluster [INF] mon.a calling monitor election 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: cluster 2026-03-09T14:39:07.984621+0000 mon.a (mon.0) 95 : cluster [INF] mon.a calling monitor election 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: cluster 2026-03-09T14:39:07.984773+0000 mon.c (mon.1) 7 : cluster [INF] mon.c calling monitor election 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: cluster 2026-03-09T14:39:07.984773+0000 mon.c (mon.1) 7 : cluster [INF] mon.c calling monitor election 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: cluster 2026-03-09T14:39:07.986799+0000 mon.a (mon.0) 96 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: cluster 2026-03-09T14:39:07.986799+0000 mon.a (mon.0) 96 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: cluster 2026-03-09T14:39:07.991101+0000 mon.a (mon.0) 97 : cluster [DBG] monmap epoch 4 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: cluster 2026-03-09T14:39:07.991101+0000 mon.a (mon.0) 97 : cluster [DBG] monmap epoch 4 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: cluster 2026-03-09T14:39:07.991144+0000 mon.a (mon.0) 98 : cluster [DBG] fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: cluster 2026-03-09T14:39:07.991144+0000 mon.a (mon.0) 98 : cluster [DBG] fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: cluster 2026-03-09T14:39:07.991183+0000 mon.a (mon.0) 99 : cluster [DBG] last_changed 2026-03-09T14:39:07.972618+0000 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: cluster 2026-03-09T14:39:07.991183+0000 mon.a (mon.0) 99 : cluster [DBG] last_changed 2026-03-09T14:39:07.972618+0000 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: cluster 2026-03-09T14:39:07.991222+0000 mon.a (mon.0) 100 : cluster [DBG] created 2026-03-09T14:29:18.743288+0000 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: cluster 
2026-03-09T14:39:07.991222+0000 mon.a (mon.0) 100 : cluster [DBG] created 2026-03-09T14:29:18.743288+0000 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: cluster 2026-03-09T14:39:07.991261+0000 mon.a (mon.0) 101 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: cluster 2026-03-09T14:39:07.991261+0000 mon.a (mon.0) 101 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: cluster 2026-03-09T14:39:07.991302+0000 mon.a (mon.0) 102 : cluster [DBG] election_strategy: 1 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: cluster 2026-03-09T14:39:07.991302+0000 mon.a (mon.0) 102 : cluster [DBG] election_strategy: 1 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: cluster 2026-03-09T14:39:07.991341+0000 mon.a (mon.0) 103 : cluster [DBG] 0: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.a 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: cluster 2026-03-09T14:39:07.991341+0000 mon.a (mon.0) 103 : cluster [DBG] 0: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.a 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: cluster 2026-03-09T14:39:07.991389+0000 mon.a (mon.0) 104 : cluster [DBG] 1: [v2:192.168.123.107:3301/0,v1:192.168.123.107:6790/0] mon.c 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: cluster 2026-03-09T14:39:07.991389+0000 mon.a (mon.0) 104 : cluster [DBG] 1: [v2:192.168.123.107:3301/0,v1:192.168.123.107:6790/0] mon.c 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: cluster 2026-03-09T14:39:07.991427+0000 mon.a (mon.0) 105 : cluster [DBG] 2: [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] mon.b 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: cluster 2026-03-09T14:39:07.991427+0000 mon.a (mon.0) 105 : cluster [DBG] 2: [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] mon.b 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: cluster 2026-03-09T14:39:07.991771+0000 mon.a (mon.0) 106 : cluster [DBG] fsmap 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: cluster 2026-03-09T14:39:07.991771+0000 mon.a (mon.0) 106 : cluster [DBG] fsmap 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: cluster 2026-03-09T14:39:07.991847+0000 mon.a (mon.0) 107 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: cluster 2026-03-09T14:39:07.991847+0000 mon.a (mon.0) 107 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: cluster 2026-03-09T14:39:07.992273+0000 mon.a (mon.0) 108 : cluster [DBG] mgrmap e38: y(active, since 15s), standbys: x 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: cluster 2026-03-09T14:39:07.992273+0000 mon.a (mon.0) 108 : cluster [DBG] mgrmap e38: y(active, since 15s), standbys: x 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 
14:39:08 vm07 bash[55244]: cluster 2026-03-09T14:39:07.992544+0000 mon.a (mon.0) 109 : cluster [INF] overall HEALTH_OK 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: cluster 2026-03-09T14:39:07.992544+0000 mon.a (mon.0) 109 : cluster [INF] overall HEALTH_OK 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: audit 2026-03-09T14:39:07.998212+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: audit 2026-03-09T14:39:07.998212+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:09.407 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: audit 2026-03-09T14:39:08.003306+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:09.408 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:08 vm07 bash[55244]: audit 2026-03-09T14:39:08.003306+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:09.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: cluster 2026-03-09T14:39:07.991261+0000 mon.a (mon.0) 101 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T14:39:09.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: cluster 2026-03-09T14:39:07.991261+0000 mon.a (mon.0) 101 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T14:39:09.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: cluster 2026-03-09T14:39:07.991302+0000 mon.a (mon.0) 102 : cluster [DBG] election_strategy: 1 2026-03-09T14:39:09.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: cluster 2026-03-09T14:39:07.991302+0000 mon.a (mon.0) 102 : cluster [DBG] election_strategy: 1 2026-03-09T14:39:09.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: cluster 2026-03-09T14:39:07.991341+0000 mon.a (mon.0) 103 : cluster [DBG] 0: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.a 2026-03-09T14:39:09.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: cluster 2026-03-09T14:39:07.991341+0000 mon.a (mon.0) 103 : cluster [DBG] 0: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.a 2026-03-09T14:39:09.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: cluster 2026-03-09T14:39:07.991389+0000 mon.a (mon.0) 104 : cluster [DBG] 1: [v2:192.168.123.107:3301/0,v1:192.168.123.107:6790/0] mon.c 2026-03-09T14:39:09.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: cluster 2026-03-09T14:39:07.991389+0000 mon.a (mon.0) 104 : cluster [DBG] 1: [v2:192.168.123.107:3301/0,v1:192.168.123.107:6790/0] mon.c 2026-03-09T14:39:09.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: cluster 2026-03-09T14:39:07.991427+0000 mon.a (mon.0) 105 : cluster [DBG] 2: [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] mon.b 2026-03-09T14:39:09.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: cluster 2026-03-09T14:39:07.991427+0000 mon.a (mon.0) 105 : cluster [DBG] 2: [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] mon.b 2026-03-09T14:39:09.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: cluster 2026-03-09T14:39:07.991771+0000 mon.a (mon.0) 106 
: cluster [DBG] fsmap 2026-03-09T14:39:09.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: cluster 2026-03-09T14:39:07.991771+0000 mon.a (mon.0) 106 : cluster [DBG] fsmap 2026-03-09T14:39:09.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: cluster 2026-03-09T14:39:07.991847+0000 mon.a (mon.0) 107 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-09T14:39:09.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: cluster 2026-03-09T14:39:07.991847+0000 mon.a (mon.0) 107 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-09T14:39:09.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: cluster 2026-03-09T14:39:07.992273+0000 mon.a (mon.0) 108 : cluster [DBG] mgrmap e38: y(active, since 15s), standbys: x 2026-03-09T14:39:09.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: cluster 2026-03-09T14:39:07.992273+0000 mon.a (mon.0) 108 : cluster [DBG] mgrmap e38: y(active, since 15s), standbys: x 2026-03-09T14:39:09.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: cluster 2026-03-09T14:39:07.992544+0000 mon.a (mon.0) 109 : cluster [INF] overall HEALTH_OK 2026-03-09T14:39:09.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: cluster 2026-03-09T14:39:07.992544+0000 mon.a (mon.0) 109 : cluster [INF] overall HEALTH_OK 2026-03-09T14:39:09.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: audit 2026-03-09T14:39:07.998212+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:09.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: audit 2026-03-09T14:39:07.998212+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:09.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: audit 2026-03-09T14:39:08.003306+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:09.408 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:08 vm07 bash[56315]: audit 2026-03-09T14:39:08.003306+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:10.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:10 vm11 bash[43577]: cluster 2026-03-09T14:39:08.523603+0000 mgr.y (mgr.44103) 31 : cluster [DBG] pgmap v11: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T14:39:10.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:10 vm11 bash[43577]: cluster 2026-03-09T14:39:08.523603+0000 mgr.y (mgr.44103) 31 : cluster [DBG] pgmap v11: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T14:39:10.406 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:10 vm07 bash[55244]: cluster 2026-03-09T14:39:08.523603+0000 mgr.y (mgr.44103) 31 : cluster [DBG] pgmap v11: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T14:39:10.406 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:10 vm07 bash[55244]: cluster 2026-03-09T14:39:08.523603+0000 mgr.y (mgr.44103) 31 : cluster [DBG] pgmap v11: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T14:39:10.406 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:10 vm07 bash[56315]: cluster 2026-03-09T14:39:08.523603+0000 mgr.y (mgr.44103) 31 : cluster [DBG] pgmap v11: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T14:39:10.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:10 vm07 bash[56315]: cluster 2026-03-09T14:39:08.523603+0000 mgr.y (mgr.44103) 31 : cluster [DBG] pgmap v11: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T14:39:12.406 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:12 vm07 bash[55244]: cluster 2026-03-09T14:39:10.524085+0000 mgr.y (mgr.44103) 32 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T14:39:12.406 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:12 vm07 bash[55244]: cluster 2026-03-09T14:39:10.524085+0000 mgr.y (mgr.44103) 32 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T14:39:12.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:12 vm07 bash[56315]: cluster 2026-03-09T14:39:10.524085+0000 mgr.y (mgr.44103) 32 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T14:39:12.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:12 vm07 bash[56315]: cluster 2026-03-09T14:39:10.524085+0000 mgr.y (mgr.44103) 32 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T14:39:12.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:12 vm11 bash[43577]: cluster 2026-03-09T14:39:10.524085+0000 mgr.y (mgr.44103) 32 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T14:39:12.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:12 vm11 bash[43577]: cluster 2026-03-09T14:39:10.524085+0000 mgr.y (mgr.44103) 32 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T14:39:12.906 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:39:12 vm07 bash[52213]: debug 2026-03-09T14:39:12.565+0000 7efe1493e640 -1 mgr.server handle_report got status from non-daemon mon.b 2026-03-09T14:39:13.906 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:39:13 vm07 bash[52213]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:39:13] "GET /metrics HTTP/1.1" 200 37739 "" "Prometheus/2.51.0" 2026-03-09T14:39:14.503 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:39:14 vm11 bash[41290]: ts=2026-03-09T14:39:14.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. 
This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"192.168.123.111:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:39:14.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:14 vm11 bash[43577]: cluster 2026-03-09T14:39:12.524356+0000 mgr.y (mgr.44103) 33 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:14.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:14 vm11 bash[43577]: cluster 2026-03-09T14:39:12.524356+0000 mgr.y (mgr.44103) 33 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:14.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:14 vm11 bash[43577]: audit 2026-03-09T14:39:13.280742+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:14.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:14 vm11 bash[43577]: audit 2026-03-09T14:39:13.280742+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:14.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:14 vm11 bash[43577]: audit 2026-03-09T14:39:13.286719+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:14.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:14 vm11 bash[43577]: audit 2026-03-09T14:39:13.286719+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:14.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:14 vm11 bash[43577]: audit 2026-03-09T14:39:13.806975+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:14.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:14 vm11 bash[43577]: audit 2026-03-09T14:39:13.806975+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:14.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:14 vm11 bash[43577]: audit 2026-03-09T14:39:13.811742+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:14.504 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:14 vm11 bash[43577]: 
audit 2026-03-09T14:39:13.811742+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:14.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:14 vm07 bash[56315]: cluster 2026-03-09T14:39:12.524356+0000 mgr.y (mgr.44103) 33 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:14.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:14 vm07 bash[56315]: cluster 2026-03-09T14:39:12.524356+0000 mgr.y (mgr.44103) 33 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:14.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:14 vm07 bash[56315]: audit 2026-03-09T14:39:13.280742+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:14.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:14 vm07 bash[56315]: audit 2026-03-09T14:39:13.280742+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:14.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:14 vm07 bash[56315]: audit 2026-03-09T14:39:13.286719+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:14.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:14 vm07 bash[56315]: audit 2026-03-09T14:39:13.286719+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:14.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:14 vm07 bash[56315]: audit 2026-03-09T14:39:13.806975+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:14.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:14 vm07 bash[56315]: audit 2026-03-09T14:39:13.806975+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:14.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:14 vm07 bash[56315]: audit 2026-03-09T14:39:13.811742+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:14.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:14 vm07 bash[56315]: audit 2026-03-09T14:39:13.811742+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:14.656 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:14 vm07 bash[55244]: cluster 2026-03-09T14:39:12.524356+0000 mgr.y (mgr.44103) 33 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:14.656 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:14 vm07 bash[55244]: cluster 2026-03-09T14:39:12.524356+0000 mgr.y (mgr.44103) 33 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:14.656 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:14 vm07 bash[55244]: audit 2026-03-09T14:39:13.280742+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:14.656 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:14 vm07 bash[55244]: audit 2026-03-09T14:39:13.280742+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 
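(The CephOSDFlapping rule evaluation logged above fails because ceph_osd_metadata is exposed twice for osd.0 — once with instance="ceph_cluster" and a cluster label, once with instance="192.168.123.111:9283" — so the `on (ceph_daemon) group_left (hostname)` join sees duplicate series on its right-hand side. A minimal way to confirm which scrape targets produce the duplicates is to query the metric over the Prometheus HTTP API; the address below is an assumption for illustration, not taken from this log — cephadm normally serves Prometheus on port 9095.

  # Hypothetical sketch: list the label sets behind the failing match group {ceph_daemon="osd.0"}.
  PROM=http://vm11.local:9095    # assumed Prometheus endpoint on the monitoring host
  curl -sG "$PROM/api/v1/query" \
    --data-urlencode 'query=ceph_osd_metadata{ceph_daemon="osd.0"}' \
    | jq '.data.result[].metric | {instance, cluster, hostname}'

If both scrape targets are intentional, the rule would typically need its join keys widened, or one of the exporters dropped, so that the right-hand side is unique per ceph_daemon.)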
2026-03-09T14:39:14.656 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:14 vm07 bash[55244]: audit 2026-03-09T14:39:13.286719+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:14.656 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:14 vm07 bash[55244]: audit 2026-03-09T14:39:13.286719+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:14.656 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:14 vm07 bash[55244]: audit 2026-03-09T14:39:13.806975+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:14.656 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:14 vm07 bash[55244]: audit 2026-03-09T14:39:13.806975+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:14.656 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:14 vm07 bash[55244]: audit 2026-03-09T14:39:13.811742+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:14.656 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:14 vm07 bash[55244]: audit 2026-03-09T14:39:13.811742+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:16.656 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:16 vm07 bash[55244]: cluster 2026-03-09T14:39:14.524920+0000 mgr.y (mgr.44103) 34 : cluster [DBG] pgmap v14: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:39:16.656 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:16 vm07 bash[55244]: cluster 2026-03-09T14:39:14.524920+0000 mgr.y (mgr.44103) 34 : cluster [DBG] pgmap v14: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:39:16.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:16 vm07 bash[56315]: cluster 2026-03-09T14:39:14.524920+0000 mgr.y (mgr.44103) 34 : cluster [DBG] pgmap v14: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:39:16.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:16 vm07 bash[56315]: cluster 2026-03-09T14:39:14.524920+0000 mgr.y (mgr.44103) 34 : cluster [DBG] pgmap v14: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:39:16.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:16 vm11 bash[43577]: cluster 2026-03-09T14:39:14.524920+0000 mgr.y (mgr.44103) 34 : cluster [DBG] pgmap v14: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:39:16.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:16 vm11 bash[43577]: cluster 2026-03-09T14:39:14.524920+0000 mgr.y (mgr.44103) 34 : cluster [DBG] pgmap v14: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:39:17.253 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:39:16 vm11 bash[41290]: ts=2026-03-09T14:39:16.947Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) 
node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:39:18.586 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:18 vm11 bash[43577]: cluster 2026-03-09T14:39:16.525245+0000 mgr.y (mgr.44103) 35 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:18.587 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:18 vm11 bash[43577]: cluster 2026-03-09T14:39:16.525245+0000 mgr.y (mgr.44103) 35 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:18.655 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:18 vm07 bash[55244]: cluster 2026-03-09T14:39:16.525245+0000 mgr.y (mgr.44103) 35 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:18.655 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:18 vm07 bash[55244]: cluster 2026-03-09T14:39:16.525245+0000 mgr.y (mgr.44103) 35 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:18.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:18 vm07 bash[56315]: cluster 2026-03-09T14:39:16.525245+0000 mgr.y (mgr.44103) 35 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:18.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:18 vm07 bash[56315]: cluster 2026-03-09T14:39:16.525245+0000 mgr.y (mgr.44103) 35 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:19.585 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:19 vm11 bash[43577]: audit 2026-03-09T14:39:17.464494+0000 mgr.y (mgr.44103) 36 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:39:19.585 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:19 vm11 bash[43577]: audit 2026-03-09T14:39:17.464494+0000 mgr.y (mgr.44103) 36 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:39:19.655 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:19 vm07 bash[55244]: audit 2026-03-09T14:39:17.464494+0000 mgr.y (mgr.44103) 36 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' 
cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:39:19.655 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:19 vm07 bash[55244]: audit 2026-03-09T14:39:17.464494+0000 mgr.y (mgr.44103) 36 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:39:19.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:19 vm07 bash[56315]: audit 2026-03-09T14:39:17.464494+0000 mgr.y (mgr.44103) 36 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:39:19.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:19 vm07 bash[56315]: audit 2026-03-09T14:39:17.464494+0000 mgr.y (mgr.44103) 36 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:39:20.611 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:20 vm07 bash[56315]: cluster 2026-03-09T14:39:18.525543+0000 mgr.y (mgr.44103) 37 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:20.611 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:20 vm07 bash[56315]: cluster 2026-03-09T14:39:18.525543+0000 mgr.y (mgr.44103) 37 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:20.611 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:20 vm07 bash[55244]: cluster 2026-03-09T14:39:18.525543+0000 mgr.y (mgr.44103) 37 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:20.611 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:20 vm07 bash[55244]: cluster 2026-03-09T14:39:18.525543+0000 mgr.y (mgr.44103) 37 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:20.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:20 vm11 bash[43577]: cluster 2026-03-09T14:39:18.525543+0000 mgr.y (mgr.44103) 37 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:20.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:20 vm11 bash[43577]: cluster 2026-03-09T14:39:18.525543+0000 mgr.y (mgr.44103) 37 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:21.349 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:21 vm07 bash[56315]: audit 2026-03-09T14:39:20.351774+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.349 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:21 vm07 bash[56315]: audit 2026-03-09T14:39:20.351774+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.349 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:21 vm07 bash[56315]: audit 2026-03-09T14:39:20.355866+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.349 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:21 vm07 bash[56315]: audit 2026-03-09T14:39:20.355866+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 
2026-03-09T14:39:21.349 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:21 vm07 bash[56315]: audit 2026-03-09T14:39:20.356472+0000 mon.a (mon.0) 118 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:21.349 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:21 vm07 bash[56315]: audit 2026-03-09T14:39:20.356472+0000 mon.a (mon.0) 118 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:21.349 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:21 vm07 bash[56315]: audit 2026-03-09T14:39:20.356885+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:39:21.349 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:21 vm07 bash[56315]: audit 2026-03-09T14:39:20.356885+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:39:21.349 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:21 vm07 bash[56315]: audit 2026-03-09T14:39:20.360757+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.349 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:21 vm07 bash[56315]: audit 2026-03-09T14:39:20.360757+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.349 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:21 vm07 bash[56315]: cephadm 2026-03-09T14:39:20.370859+0000 mgr.y (mgr.44103) 38 : cephadm [INF] Reconfiguring osd.3 (monmap changed)... 2026-03-09T14:39:21.349 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:21 vm07 bash[56315]: cephadm 2026-03-09T14:39:20.370859+0000 mgr.y (mgr.44103) 38 : cephadm [INF] Reconfiguring osd.3 (monmap changed)... 
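(The repeated `config generate-minimal-conf` and `auth get` dispatches above are cephadm fetching a fresh minimal ceph.conf and keyring for each daemon it reconfigures after the monmap change. The same data can be pulled by hand for comparison; this is only an illustration, not part of the test tasks.

  # Sketch: reproduce what the mgr asks the mon for when it rewrites a daemon's config.
  ceph config generate-minimal-conf   # minimal [global] section with fsid and mon_host
  ceph auth get osd.3                 # keyring cephadm drops into the daemon's /var/lib/ceph/<fsid>/osd.3 directory
)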
2026-03-09T14:39:21.349 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:21 vm07 bash[56315]: audit 2026-03-09T14:39:20.370981+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T14:39:21.350 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:21 vm07 bash[55244]: audit 2026-03-09T14:39:20.351774+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.350 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:21 vm07 bash[55244]: audit 2026-03-09T14:39:20.351774+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.350 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:21 vm07 bash[55244]: audit 2026-03-09T14:39:20.355866+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.350 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:21 vm07 bash[55244]: audit 2026-03-09T14:39:20.355866+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.350 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:21 vm07 bash[55244]: audit 2026-03-09T14:39:20.356472+0000 mon.a (mon.0) 118 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:21.350 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:21 vm07 bash[55244]: audit 2026-03-09T14:39:20.356472+0000 mon.a (mon.0) 118 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:21.350 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:21 vm07 bash[55244]: audit 2026-03-09T14:39:20.356885+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:39:21.350 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:21 vm07 bash[55244]: audit 2026-03-09T14:39:20.356885+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:39:21.350 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:21 vm07 bash[55244]: audit 2026-03-09T14:39:20.360757+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.350 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:21 vm07 bash[55244]: audit 2026-03-09T14:39:20.360757+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.350 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:21 vm07 bash[55244]: cephadm 2026-03-09T14:39:20.370859+0000 mgr.y (mgr.44103) 38 : cephadm [INF] Reconfiguring osd.3 (monmap changed)... 2026-03-09T14:39:21.350 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:21 vm07 bash[55244]: cephadm 2026-03-09T14:39:20.370859+0000 mgr.y (mgr.44103) 38 : cephadm [INF] Reconfiguring osd.3 (monmap changed)... 
2026-03-09T14:39:21.350 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:21 vm07 bash[55244]: audit 2026-03-09T14:39:20.370981+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T14:39:21.350 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:21 vm07 bash[55244]: audit 2026-03-09T14:39:20.370981+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T14:39:21.350 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:21 vm07 bash[55244]: audit 2026-03-09T14:39:20.371309+0000 mon.a (mon.0) 122 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:21.350 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:21 vm07 bash[55244]: audit 2026-03-09T14:39:20.371309+0000 mon.a (mon.0) 122 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:21.350 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:21 vm07 bash[55244]: cephadm 2026-03-09T14:39:20.372280+0000 mgr.y (mgr.44103) 39 : cephadm [INF] Reconfiguring daemon osd.3 on vm07 2026-03-09T14:39:21.350 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:21 vm07 bash[55244]: cephadm 2026-03-09T14:39:20.372280+0000 mgr.y (mgr.44103) 39 : cephadm [INF] Reconfiguring daemon osd.3 on vm07 2026-03-09T14:39:21.350 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:21 vm07 bash[55244]: audit 2026-03-09T14:39:20.758741+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.350 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:21 vm07 bash[55244]: audit 2026-03-09T14:39:20.758741+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.350 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:21 vm07 bash[55244]: audit 2026-03-09T14:39:20.764125+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.350 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:21 vm07 bash[55244]: audit 2026-03-09T14:39:20.764125+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.350 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:21 vm07 bash[55244]: audit 2026-03-09T14:39:20.765269+0000 mon.a (mon.0) 125 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T14:39:21.350 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:21 vm07 bash[55244]: audit 2026-03-09T14:39:20.765269+0000 mon.a (mon.0) 125 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T14:39:21.350 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:21 vm07 bash[55244]: audit 2026-03-09T14:39:20.765747+0000 mon.a (mon.0) 126 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:21.350 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:21 vm07 bash[55244]: audit 2026-03-09T14:39:20.765747+0000 mon.a (mon.0) 126 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:21.350 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:21 vm07 bash[55244]: audit 2026-03-09T14:39:21.139918+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.350 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:21 vm07 bash[55244]: audit 2026-03-09T14:39:21.139918+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.350 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:21 vm07 bash[55244]: audit 2026-03-09T14:39:21.148973+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.350 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:21 vm07 bash[55244]: audit 2026-03-09T14:39:21.148973+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.350 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:21 vm07 bash[55244]: audit 2026-03-09T14:39:21.150941+0000 mon.a (mon.0) 129 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:39:21.350 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:21 vm07 bash[55244]: audit 2026-03-09T14:39:21.150941+0000 mon.a (mon.0) 129 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:39:21.350 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:21 vm07 bash[55244]: audit 2026-03-09T14:39:21.151409+0000 mon.a (mon.0) 130 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:39:21.350 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:21 vm07 bash[55244]: audit 2026-03-09T14:39:21.151409+0000 mon.a (mon.0) 130 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:39:21.350 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:21 vm07 bash[55244]: audit 2026-03-09T14:39:21.151820+0000 mon.a (mon.0) 131 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:21.350 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:21 vm07 bash[55244]: audit 2026-03-09T14:39:21.151820+0000 mon.a (mon.0) 131 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:21.610 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:21 vm07 bash[56315]: audit 2026-03-09T14:39:20.370981+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T14:39:21.610 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:21 vm07 bash[56315]: audit 2026-03-09T14:39:20.371309+0000 mon.a (mon.0) 122 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:21.610 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:21 vm07 bash[56315]: audit 2026-03-09T14:39:20.371309+0000 mon.a (mon.0) 122 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-09T14:39:21.610 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:21 vm07 bash[56315]: cephadm 2026-03-09T14:39:20.372280+0000 mgr.y (mgr.44103) 39 : cephadm [INF] Reconfiguring daemon osd.3 on vm07 2026-03-09T14:39:21.610 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:21 vm07 bash[56315]: cephadm 2026-03-09T14:39:20.372280+0000 mgr.y (mgr.44103) 39 : cephadm [INF] Reconfiguring daemon osd.3 on vm07 2026-03-09T14:39:21.610 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:21 vm07 bash[56315]: audit 2026-03-09T14:39:20.758741+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.610 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:21 vm07 bash[56315]: audit 2026-03-09T14:39:20.758741+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.610 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:21 vm07 bash[56315]: audit 2026-03-09T14:39:20.764125+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.610 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:21 vm07 bash[56315]: audit 2026-03-09T14:39:20.764125+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.610 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:21 vm07 bash[56315]: audit 2026-03-09T14:39:20.765269+0000 mon.a (mon.0) 125 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T14:39:21.610 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:21 vm07 bash[56315]: audit 2026-03-09T14:39:20.765269+0000 mon.a (mon.0) 125 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T14:39:21.610 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:21 vm07 bash[56315]: audit 2026-03-09T14:39:20.765747+0000 mon.a (mon.0) 126 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:21.610 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:21 vm07 bash[56315]: audit 2026-03-09T14:39:20.765747+0000 mon.a (mon.0) 126 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:21.610 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:21 vm07 bash[56315]: audit 2026-03-09T14:39:21.139918+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.610 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:21 vm07 bash[56315]: audit 2026-03-09T14:39:21.139918+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.610 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:21 vm07 bash[56315]: audit 2026-03-09T14:39:21.148973+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.610 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:21 vm07 bash[56315]: audit 2026-03-09T14:39:21.148973+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.610 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:21 vm07 bash[56315]: audit 2026-03-09T14:39:21.150941+0000 mon.a 
(mon.0) 129 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:39:21.610 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:21 vm07 bash[56315]: audit 2026-03-09T14:39:21.150941+0000 mon.a (mon.0) 129 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:39:21.610 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:21 vm07 bash[56315]: audit 2026-03-09T14:39:21.151409+0000 mon.a (mon.0) 130 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:39:21.610 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:21 vm07 bash[56315]: audit 2026-03-09T14:39:21.151409+0000 mon.a (mon.0) 130 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:39:21.610 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:21 vm07 bash[56315]: audit 2026-03-09T14:39:21.151820+0000 mon.a (mon.0) 131 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:21.610 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:21 vm07 bash[56315]: audit 2026-03-09T14:39:21.151820+0000 mon.a (mon.0) 131 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:21.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:21 vm11 bash[43577]: audit 2026-03-09T14:39:20.351774+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:21 vm11 bash[43577]: audit 2026-03-09T14:39:20.351774+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:21 vm11 bash[43577]: audit 2026-03-09T14:39:20.355866+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:21 vm11 bash[43577]: audit 2026-03-09T14:39:20.355866+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:21 vm11 bash[43577]: audit 2026-03-09T14:39:20.356472+0000 mon.a (mon.0) 118 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:21.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:21 vm11 bash[43577]: audit 2026-03-09T14:39:20.356472+0000 mon.a (mon.0) 118 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:21.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:21 vm11 bash[43577]: audit 2026-03-09T14:39:20.356885+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:39:21.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:21 vm11 bash[43577]: audit 2026-03-09T14:39:20.356885+0000 mon.a (mon.0) 119 : audit [INF] 
from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:39:21.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:21 vm11 bash[43577]: audit 2026-03-09T14:39:20.360757+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:21 vm11 bash[43577]: audit 2026-03-09T14:39:20.360757+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:21 vm11 bash[43577]: cephadm 2026-03-09T14:39:20.370859+0000 mgr.y (mgr.44103) 38 : cephadm [INF] Reconfiguring osd.3 (monmap changed)... 2026-03-09T14:39:21.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:21 vm11 bash[43577]: cephadm 2026-03-09T14:39:20.370859+0000 mgr.y (mgr.44103) 38 : cephadm [INF] Reconfiguring osd.3 (monmap changed)... 2026-03-09T14:39:21.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:21 vm11 bash[43577]: audit 2026-03-09T14:39:20.370981+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T14:39:21.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:21 vm11 bash[43577]: audit 2026-03-09T14:39:20.370981+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T14:39:21.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:21 vm11 bash[43577]: audit 2026-03-09T14:39:20.371309+0000 mon.a (mon.0) 122 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:21.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:21 vm11 bash[43577]: audit 2026-03-09T14:39:20.371309+0000 mon.a (mon.0) 122 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:21.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:21 vm11 bash[43577]: cephadm 2026-03-09T14:39:20.372280+0000 mgr.y (mgr.44103) 39 : cephadm [INF] Reconfiguring daemon osd.3 on vm07 2026-03-09T14:39:21.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:21 vm11 bash[43577]: cephadm 2026-03-09T14:39:20.372280+0000 mgr.y (mgr.44103) 39 : cephadm [INF] Reconfiguring daemon osd.3 on vm07 2026-03-09T14:39:21.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:21 vm11 bash[43577]: audit 2026-03-09T14:39:20.758741+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:21 vm11 bash[43577]: audit 2026-03-09T14:39:20.758741+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:21 vm11 bash[43577]: audit 2026-03-09T14:39:20.764125+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:21 vm11 bash[43577]: audit 2026-03-09T14:39:20.764125+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:21 
vm11 bash[43577]: audit 2026-03-09T14:39:20.765269+0000 mon.a (mon.0) 125 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T14:39:21.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:21 vm11 bash[43577]: audit 2026-03-09T14:39:20.765269+0000 mon.a (mon.0) 125 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T14:39:21.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:21 vm11 bash[43577]: audit 2026-03-09T14:39:20.765747+0000 mon.a (mon.0) 126 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:21.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:21 vm11 bash[43577]: audit 2026-03-09T14:39:20.765747+0000 mon.a (mon.0) 126 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:21.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:21 vm11 bash[43577]: audit 2026-03-09T14:39:21.139918+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:21 vm11 bash[43577]: audit 2026-03-09T14:39:21.139918+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:21 vm11 bash[43577]: audit 2026-03-09T14:39:21.148973+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:21 vm11 bash[43577]: audit 2026-03-09T14:39:21.148973+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:21.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:21 vm11 bash[43577]: audit 2026-03-09T14:39:21.150941+0000 mon.a (mon.0) 129 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:39:21.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:21 vm11 bash[43577]: audit 2026-03-09T14:39:21.150941+0000 mon.a (mon.0) 129 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:39:21.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:21 vm11 bash[43577]: audit 2026-03-09T14:39:21.151409+0000 mon.a (mon.0) 130 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:39:21.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:21 vm11 bash[43577]: audit 2026-03-09T14:39:21.151409+0000 mon.a (mon.0) 130 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:39:21.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:21 vm11 bash[43577]: audit 2026-03-09T14:39:21.151820+0000 mon.a (mon.0) 131 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:21.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:21 vm11 bash[43577]: audit 
2026-03-09T14:39:21.151820+0000 mon.a (mon.0) 131 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:22.588 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:22 vm07 bash[55244]: cluster 2026-03-09T14:39:20.525998+0000 mgr.y (mgr.44103) 40 : cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:39:22.588 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:22 vm07 bash[55244]: cluster 2026-03-09T14:39:20.525998+0000 mgr.y (mgr.44103) 40 : cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:39:22.588 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:22 vm07 bash[55244]: cephadm 2026-03-09T14:39:20.765071+0000 mgr.y (mgr.44103) 41 : cephadm [INF] Reconfiguring osd.2 (monmap changed)... 2026-03-09T14:39:22.588 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:22 vm07 bash[55244]: cephadm 2026-03-09T14:39:20.765071+0000 mgr.y (mgr.44103) 41 : cephadm [INF] Reconfiguring osd.2 (monmap changed)... 2026-03-09T14:39:22.588 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:22 vm07 bash[55244]: cephadm 2026-03-09T14:39:20.766843+0000 mgr.y (mgr.44103) 42 : cephadm [INF] Reconfiguring daemon osd.2 on vm07 2026-03-09T14:39:22.588 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:22 vm07 bash[55244]: cephadm 2026-03-09T14:39:20.766843+0000 mgr.y (mgr.44103) 42 : cephadm [INF] Reconfiguring daemon osd.2 on vm07 2026-03-09T14:39:22.588 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:22 vm07 bash[55244]: cephadm 2026-03-09T14:39:21.149845+0000 mgr.y (mgr.44103) 43 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 2026-03-09T14:39:22.588 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:22 vm07 bash[55244]: cephadm 2026-03-09T14:39:21.149845+0000 mgr.y (mgr.44103) 43 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 
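(The cephadm [INF] "Reconfiguring ..." lines relayed above come from the mgr's cephadm cluster-log channel; rather than reading them through the per-mon journalctl streams, they can be followed directly. A small sketch, assuming an admin keyring is available on the host:

  ceph -W cephadm                  # stream cephadm channel events as they happen
  ceph log last 50 info cephadm    # pull the recent cephadm channel history after the fact
)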
2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:22 vm07 bash[55244]: cephadm 2026-03-09T14:39:21.152269+0000 mgr.y (mgr.44103) 44 : cephadm [INF] Reconfiguring daemon mon.c on vm07 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:22 vm07 bash[55244]: cephadm 2026-03-09T14:39:21.152269+0000 mgr.y (mgr.44103) 44 : cephadm [INF] Reconfiguring daemon mon.c on vm07 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:22 vm07 bash[55244]: audit 2026-03-09T14:39:21.529352+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:22 vm07 bash[55244]: audit 2026-03-09T14:39:21.529352+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:22 vm07 bash[55244]: audit 2026-03-09T14:39:21.535596+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:22 vm07 bash[55244]: audit 2026-03-09T14:39:21.535596+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:22 vm07 bash[55244]: audit 2026-03-09T14:39:21.537033+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:22 vm07 bash[55244]: audit 2026-03-09T14:39:21.537033+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:22 vm07 bash[55244]: audit 2026-03-09T14:39:21.537527+0000 mon.a (mon.0) 135 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:22 vm07 bash[55244]: audit 2026-03-09T14:39:21.537527+0000 mon.a (mon.0) 135 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:22 vm07 bash[55244]: audit 2026-03-09T14:39:21.913452+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:22 vm07 bash[55244]: audit 2026-03-09T14:39:21.913452+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:22 vm07 bash[55244]: audit 2026-03-09T14:39:21.919482+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:22 vm07 bash[55244]: audit 2026-03-09T14:39:21.919482+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:22 vm07 bash[55244]: audit 2026-03-09T14:39:21.920404+0000 mon.a (mon.0) 138 : audit [INF] 
from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:22 vm07 bash[55244]: audit 2026-03-09T14:39:21.920404+0000 mon.a (mon.0) 138 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:22 vm07 bash[55244]: audit 2026-03-09T14:39:21.922240+0000 mon.a (mon.0) 139 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:22 vm07 bash[55244]: audit 2026-03-09T14:39:21.922240+0000 mon.a (mon.0) 139 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:22 vm07 bash[55244]: audit 2026-03-09T14:39:21.922697+0000 mon.a (mon.0) 140 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:22 vm07 bash[55244]: audit 2026-03-09T14:39:21.922697+0000 mon.a (mon.0) 140 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:22 vm07 bash[55244]: audit 2026-03-09T14:39:22.279588+0000 mon.a (mon.0) 141 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:22 vm07 bash[55244]: audit 2026-03-09T14:39:22.279588+0000 mon.a (mon.0) 141 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:22 vm07 bash[55244]: audit 2026-03-09T14:39:22.284903+0000 mon.a (mon.0) 142 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:22 vm07 bash[55244]: audit 2026-03-09T14:39:22.284903+0000 mon.a (mon.0) 142 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:22 vm07 bash[55244]: audit 2026-03-09T14:39:22.285840+0000 mon.a (mon.0) 143 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm07.tkkeli", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:22 vm07 bash[55244]: audit 2026-03-09T14:39:22.285840+0000 mon.a (mon.0) 143 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm07.tkkeli", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:22 vm07 bash[55244]: audit 2026-03-09T14:39:22.287910+0000 mon.a (mon.0) 144 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:22 vm07 bash[55244]: audit 2026-03-09T14:39:22.287910+0000 mon.a (mon.0) 144 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:22 vm07 bash[56315]: cluster 2026-03-09T14:39:20.525998+0000 mgr.y (mgr.44103) 40 : cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:22 vm07 bash[56315]: cluster 2026-03-09T14:39:20.525998+0000 mgr.y (mgr.44103) 40 : cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:22 vm07 bash[56315]: cephadm 2026-03-09T14:39:20.765071+0000 mgr.y (mgr.44103) 41 : cephadm [INF] Reconfiguring osd.2 (monmap changed)... 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:22 vm07 bash[56315]: cephadm 2026-03-09T14:39:20.765071+0000 mgr.y (mgr.44103) 41 : cephadm [INF] Reconfiguring osd.2 (monmap changed)... 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:22 vm07 bash[56315]: cephadm 2026-03-09T14:39:20.766843+0000 mgr.y (mgr.44103) 42 : cephadm [INF] Reconfiguring daemon osd.2 on vm07 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:22 vm07 bash[56315]: cephadm 2026-03-09T14:39:20.766843+0000 mgr.y (mgr.44103) 42 : cephadm [INF] Reconfiguring daemon osd.2 on vm07 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:22 vm07 bash[56315]: cephadm 2026-03-09T14:39:21.149845+0000 mgr.y (mgr.44103) 43 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:22 vm07 bash[56315]: cephadm 2026-03-09T14:39:21.149845+0000 mgr.y (mgr.44103) 43 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 
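(The pgmap relays above — 161 PGs, all active+clean — are the state the upgrade relies on holding while daemons are redeployed. A quick way to assert it from a shell, as a sketch; the jq path assumes the usual `ceph status` JSON layout:

  # Sketch: fail fast if any PG is not active+clean.
  ceph pg stat
  ceph status -f json | jq -e '.pgmap.pgs_by_state | all(.state_name == "active+clean")'
)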
2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:22 vm07 bash[56315]: cephadm 2026-03-09T14:39:21.152269+0000 mgr.y (mgr.44103) 44 : cephadm [INF] Reconfiguring daemon mon.c on vm07 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:22 vm07 bash[56315]: cephadm 2026-03-09T14:39:21.152269+0000 mgr.y (mgr.44103) 44 : cephadm [INF] Reconfiguring daemon mon.c on vm07 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:22 vm07 bash[56315]: audit 2026-03-09T14:39:21.529352+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:22 vm07 bash[56315]: audit 2026-03-09T14:39:21.529352+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:22 vm07 bash[56315]: audit 2026-03-09T14:39:21.535596+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:22 vm07 bash[56315]: audit 2026-03-09T14:39:21.535596+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:22 vm07 bash[56315]: audit 2026-03-09T14:39:21.537033+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T14:39:22.589 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:22 vm07 bash[56315]: audit 2026-03-09T14:39:21.537033+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T14:39:22.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:22 vm07 bash[56315]: audit 2026-03-09T14:39:21.537527+0000 mon.a (mon.0) 135 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:22.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:22 vm07 bash[56315]: audit 2026-03-09T14:39:21.537527+0000 mon.a (mon.0) 135 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:22.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:22 vm07 bash[56315]: audit 2026-03-09T14:39:21.913452+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:22.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:22 vm07 bash[56315]: audit 2026-03-09T14:39:21.913452+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:22.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:22 vm07 bash[56315]: audit 2026-03-09T14:39:21.919482+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:22.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:22 vm07 bash[56315]: audit 2026-03-09T14:39:21.919482+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:22.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:22 vm07 bash[56315]: audit 2026-03-09T14:39:21.920404+0000 mon.a (mon.0) 138 : audit [INF] 
from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:39:22.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:22 vm07 bash[56315]: audit 2026-03-09T14:39:21.920404+0000 mon.a (mon.0) 138 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:39:22.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:22 vm07 bash[56315]: audit 2026-03-09T14:39:21.922240+0000 mon.a (mon.0) 139 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:39:22.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:22 vm07 bash[56315]: audit 2026-03-09T14:39:21.922240+0000 mon.a (mon.0) 139 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:39:22.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:22 vm07 bash[56315]: audit 2026-03-09T14:39:21.922697+0000 mon.a (mon.0) 140 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:22.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:22 vm07 bash[56315]: audit 2026-03-09T14:39:21.922697+0000 mon.a (mon.0) 140 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:22.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:22 vm07 bash[56315]: audit 2026-03-09T14:39:22.279588+0000 mon.a (mon.0) 141 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:22.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:22 vm07 bash[56315]: audit 2026-03-09T14:39:22.279588+0000 mon.a (mon.0) 141 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:22.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:22 vm07 bash[56315]: audit 2026-03-09T14:39:22.284903+0000 mon.a (mon.0) 142 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:22.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:22 vm07 bash[56315]: audit 2026-03-09T14:39:22.284903+0000 mon.a (mon.0) 142 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:22.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:22 vm07 bash[56315]: audit 2026-03-09T14:39:22.285840+0000 mon.a (mon.0) 143 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm07.tkkeli", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:39:22.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:22 vm07 bash[56315]: audit 2026-03-09T14:39:22.285840+0000 mon.a (mon.0) 143 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm07.tkkeli", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:39:22.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:22 vm07 bash[56315]: audit 2026-03-09T14:39:22.287910+0000 mon.a (mon.0) 144 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-09T14:39:22.590 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:22 vm07 bash[56315]: audit 2026-03-09T14:39:22.287910+0000 mon.a (mon.0) 144 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:22.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:22 vm11 bash[43577]: cluster 2026-03-09T14:39:20.525998+0000 mgr.y (mgr.44103) 40 : cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:39:22.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:22 vm11 bash[43577]: cluster 2026-03-09T14:39:20.525998+0000 mgr.y (mgr.44103) 40 : cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:39:22.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:22 vm11 bash[43577]: cephadm 2026-03-09T14:39:20.765071+0000 mgr.y (mgr.44103) 41 : cephadm [INF] Reconfiguring osd.2 (monmap changed)... 2026-03-09T14:39:22.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:22 vm11 bash[43577]: cephadm 2026-03-09T14:39:20.765071+0000 mgr.y (mgr.44103) 41 : cephadm [INF] Reconfiguring osd.2 (monmap changed)... 2026-03-09T14:39:22.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:22 vm11 bash[43577]: cephadm 2026-03-09T14:39:20.766843+0000 mgr.y (mgr.44103) 42 : cephadm [INF] Reconfiguring daemon osd.2 on vm07 2026-03-09T14:39:22.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:22 vm11 bash[43577]: cephadm 2026-03-09T14:39:20.766843+0000 mgr.y (mgr.44103) 42 : cephadm [INF] Reconfiguring daemon osd.2 on vm07 2026-03-09T14:39:22.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:22 vm11 bash[43577]: cephadm 2026-03-09T14:39:21.149845+0000 mgr.y (mgr.44103) 43 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 2026-03-09T14:39:22.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:22 vm11 bash[43577]: cephadm 2026-03-09T14:39:21.149845+0000 mgr.y (mgr.44103) 43 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 
2026-03-09T14:39:22.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:22 vm11 bash[43577]: cephadm 2026-03-09T14:39:21.152269+0000 mgr.y (mgr.44103) 44 : cephadm [INF] Reconfiguring daemon mon.c on vm07 2026-03-09T14:39:22.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:22 vm11 bash[43577]: cephadm 2026-03-09T14:39:21.152269+0000 mgr.y (mgr.44103) 44 : cephadm [INF] Reconfiguring daemon mon.c on vm07 2026-03-09T14:39:22.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:22 vm11 bash[43577]: audit 2026-03-09T14:39:21.529352+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:22.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:22 vm11 bash[43577]: audit 2026-03-09T14:39:21.529352+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:22.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:22 vm11 bash[43577]: audit 2026-03-09T14:39:21.535596+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:22.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:22 vm11 bash[43577]: audit 2026-03-09T14:39:21.535596+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:22.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:22 vm11 bash[43577]: audit 2026-03-09T14:39:21.537033+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T14:39:22.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:22 vm11 bash[43577]: audit 2026-03-09T14:39:21.537033+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T14:39:22.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:22 vm11 bash[43577]: audit 2026-03-09T14:39:21.537527+0000 mon.a (mon.0) 135 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:22.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:22 vm11 bash[43577]: audit 2026-03-09T14:39:21.537527+0000 mon.a (mon.0) 135 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:22.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:22 vm11 bash[43577]: audit 2026-03-09T14:39:21.913452+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:22.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:22 vm11 bash[43577]: audit 2026-03-09T14:39:21.913452+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:22.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:22 vm11 bash[43577]: audit 2026-03-09T14:39:21.919482+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:22.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:22 vm11 bash[43577]: audit 2026-03-09T14:39:21.919482+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:22.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:22 vm11 bash[43577]: audit 2026-03-09T14:39:21.920404+0000 mon.a (mon.0) 138 : audit [INF] 
from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:39:22.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:22 vm11 bash[43577]: audit 2026-03-09T14:39:21.920404+0000 mon.a (mon.0) 138 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:39:22.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:22 vm11 bash[43577]: audit 2026-03-09T14:39:21.922240+0000 mon.a (mon.0) 139 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:39:22.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:22 vm11 bash[43577]: audit 2026-03-09T14:39:21.922240+0000 mon.a (mon.0) 139 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:39:22.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:22 vm11 bash[43577]: audit 2026-03-09T14:39:21.922697+0000 mon.a (mon.0) 140 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:22.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:22 vm11 bash[43577]: audit 2026-03-09T14:39:21.922697+0000 mon.a (mon.0) 140 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:22.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:22 vm11 bash[43577]: audit 2026-03-09T14:39:22.279588+0000 mon.a (mon.0) 141 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:22.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:22 vm11 bash[43577]: audit 2026-03-09T14:39:22.279588+0000 mon.a (mon.0) 141 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:22.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:22 vm11 bash[43577]: audit 2026-03-09T14:39:22.284903+0000 mon.a (mon.0) 142 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:22.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:22 vm11 bash[43577]: audit 2026-03-09T14:39:22.284903+0000 mon.a (mon.0) 142 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:22.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:22 vm11 bash[43577]: audit 2026-03-09T14:39:22.285840+0000 mon.a (mon.0) 143 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm07.tkkeli", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:39:22.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:22 vm11 bash[43577]: audit 2026-03-09T14:39:22.285840+0000 mon.a (mon.0) 143 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm07.tkkeli", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:39:22.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:22 vm11 bash[43577]: audit 2026-03-09T14:39:22.287910+0000 mon.a (mon.0) 144 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-09T14:39:22.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:22 vm11 bash[43577]: audit 2026-03-09T14:39:22.287910+0000 mon.a (mon.0) 144 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:23.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:23 vm07 bash[56315]: cephadm 2026-03-09T14:39:21.536306+0000 mgr.y (mgr.44103) 45 : cephadm [INF] Reconfiguring osd.0 (monmap changed)... 2026-03-09T14:39:23.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:23 vm07 bash[56315]: cephadm 2026-03-09T14:39:21.536306+0000 mgr.y (mgr.44103) 45 : cephadm [INF] Reconfiguring osd.0 (monmap changed)... 2026-03-09T14:39:23.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:23 vm07 bash[56315]: cephadm 2026-03-09T14:39:21.538619+0000 mgr.y (mgr.44103) 46 : cephadm [INF] Reconfiguring daemon osd.0 on vm07 2026-03-09T14:39:23.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:23 vm07 bash[56315]: cephadm 2026-03-09T14:39:21.538619+0000 mgr.y (mgr.44103) 46 : cephadm [INF] Reconfiguring daemon osd.0 on vm07 2026-03-09T14:39:23.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:23 vm07 bash[56315]: cephadm 2026-03-09T14:39:21.920072+0000 mgr.y (mgr.44103) 47 : cephadm [INF] Reconfiguring mon.a (monmap changed)... 2026-03-09T14:39:23.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:23 vm07 bash[56315]: cephadm 2026-03-09T14:39:21.920072+0000 mgr.y (mgr.44103) 47 : cephadm [INF] Reconfiguring mon.a (monmap changed)... 2026-03-09T14:39:23.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:23 vm07 bash[56315]: cephadm 2026-03-09T14:39:21.923196+0000 mgr.y (mgr.44103) 48 : cephadm [INF] Reconfiguring daemon mon.a on vm07 2026-03-09T14:39:23.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:23 vm07 bash[56315]: cephadm 2026-03-09T14:39:21.923196+0000 mgr.y (mgr.44103) 48 : cephadm [INF] Reconfiguring daemon mon.a on vm07 2026-03-09T14:39:23.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:23 vm07 bash[56315]: cephadm 2026-03-09T14:39:22.285605+0000 mgr.y (mgr.44103) 49 : cephadm [INF] Reconfiguring rgw.smpl.vm07.tkkeli (monmap changed)... 2026-03-09T14:39:23.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:23 vm07 bash[56315]: cephadm 2026-03-09T14:39:22.285605+0000 mgr.y (mgr.44103) 49 : cephadm [INF] Reconfiguring rgw.smpl.vm07.tkkeli (monmap changed)... 
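[editor's note] The audit entries above record each mgr-to-mon request as a JSON dispatch (e.g. cmd=[{"prefix": "config generate-minimal-conf"}]). For reference only, the same JSON form can be issued through the librados Python binding; the sketch below is illustrative and not part of this run (the conffile path and admin keyring are assumptions, and the command merely mirrors the dispatches seen in this log).

# Illustrative sketch: issue the same kind of mon command that the audit log
# records as cmd=[{"prefix": ...}], via the librados Python binding.
# Assumptions (not from this run): a local /etc/ceph/ceph.conf and a
# client.admin keyring are available to this process.
import json
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    # Same prefix as the repeated audit [DBG] dispatches above.
    cmd = json.dumps({"prefix": "config generate-minimal-conf"})
    ret, outbuf, outs = cluster.mon_command(cmd, b"")
    if ret == 0:
        print(outbuf.decode())   # minimal ceph.conf, as the mgr receives it
    else:
        print(f"mon_command failed: ret={ret} {outs}")
finally:
    cluster.shutdown()

[end editor's note]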
2026-03-09T14:39:23.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:23 vm07 bash[56315]: cephadm 2026-03-09T14:39:22.288396+0000 mgr.y (mgr.44103) 50 : cephadm [INF] Reconfiguring daemon rgw.smpl.vm07.tkkeli on vm07 2026-03-09T14:39:23.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:23 vm07 bash[56315]: cephadm 2026-03-09T14:39:22.288396+0000 mgr.y (mgr.44103) 50 : cephadm [INF] Reconfiguring daemon rgw.smpl.vm07.tkkeli on vm07 2026-03-09T14:39:23.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:23 vm07 bash[56315]: audit 2026-03-09T14:39:22.571296+0000 mon.a (mon.0) 145 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:39:23.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:23 vm07 bash[56315]: audit 2026-03-09T14:39:22.571296+0000 mon.a (mon.0) 145 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:39:23.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:23 vm07 bash[56315]: audit 2026-03-09T14:39:22.679952+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:23.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:23 vm07 bash[56315]: audit 2026-03-09T14:39:22.679952+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:23.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:23 vm07 bash[56315]: audit 2026-03-09T14:39:22.685435+0000 mon.a (mon.0) 147 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:23.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:23 vm07 bash[56315]: audit 2026-03-09T14:39:22.685435+0000 mon.a (mon.0) 147 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:23.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:23 vm07 bash[56315]: audit 2026-03-09T14:39:22.686632+0000 mon.a (mon.0) 148 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T14:39:23.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:23 vm07 bash[56315]: audit 2026-03-09T14:39:22.686632+0000 mon.a (mon.0) 148 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T14:39:23.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:23 vm07 bash[56315]: audit 2026-03-09T14:39:22.687056+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:23.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:23 vm07 bash[56315]: audit 2026-03-09T14:39:22.687056+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:23.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:23 vm07 bash[56315]: audit 2026-03-09T14:39:23.063273+0000 mon.a (mon.0) 150 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:23.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:23 vm07 bash[56315]: audit 2026-03-09T14:39:23.063273+0000 mon.a (mon.0) 150 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 
2026-03-09T14:39:23.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:23 vm07 bash[56315]: audit 2026-03-09T14:39:23.068766+0000 mon.a (mon.0) 151 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:23.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:23 vm07 bash[56315]: audit 2026-03-09T14:39:23.068766+0000 mon.a (mon.0) 151 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:23.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:23 vm07 bash[56315]: audit 2026-03-09T14:39:23.069867+0000 mon.a (mon.0) 152 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T14:39:23.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:23 vm07 bash[56315]: audit 2026-03-09T14:39:23.069867+0000 mon.a (mon.0) 152 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T14:39:23.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:23 vm07 bash[56315]: audit 2026-03-09T14:39:23.070280+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T14:39:23.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:23 vm07 bash[56315]: audit 2026-03-09T14:39:23.070280+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T14:39:23.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:23 vm07 bash[56315]: audit 2026-03-09T14:39:23.070702+0000 mon.a (mon.0) 154 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:23.656 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:23 vm07 bash[56315]: audit 2026-03-09T14:39:23.070702+0000 mon.a (mon.0) 154 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:23.657 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:39:23 vm07 bash[52213]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:39:23] "GET /metrics HTTP/1.1" 200 37821 "" "Prometheus/2.51.0" 2026-03-09T14:39:23.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:23 vm07 bash[55244]: cephadm 2026-03-09T14:39:21.536306+0000 mgr.y (mgr.44103) 45 : cephadm [INF] Reconfiguring osd.0 (monmap changed)... 2026-03-09T14:39:23.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:23 vm07 bash[55244]: cephadm 2026-03-09T14:39:21.536306+0000 mgr.y (mgr.44103) 45 : cephadm [INF] Reconfiguring osd.0 (monmap changed)... 
2026-03-09T14:39:23.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:23 vm07 bash[55244]: cephadm 2026-03-09T14:39:21.538619+0000 mgr.y (mgr.44103) 46 : cephadm [INF] Reconfiguring daemon osd.0 on vm07 2026-03-09T14:39:23.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:23 vm07 bash[55244]: cephadm 2026-03-09T14:39:21.538619+0000 mgr.y (mgr.44103) 46 : cephadm [INF] Reconfiguring daemon osd.0 on vm07 2026-03-09T14:39:23.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:23 vm07 bash[55244]: cephadm 2026-03-09T14:39:21.920072+0000 mgr.y (mgr.44103) 47 : cephadm [INF] Reconfiguring mon.a (monmap changed)... 2026-03-09T14:39:23.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:23 vm07 bash[55244]: cephadm 2026-03-09T14:39:21.920072+0000 mgr.y (mgr.44103) 47 : cephadm [INF] Reconfiguring mon.a (monmap changed)... 2026-03-09T14:39:23.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:23 vm07 bash[55244]: cephadm 2026-03-09T14:39:21.923196+0000 mgr.y (mgr.44103) 48 : cephadm [INF] Reconfiguring daemon mon.a on vm07 2026-03-09T14:39:23.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:23 vm07 bash[55244]: cephadm 2026-03-09T14:39:21.923196+0000 mgr.y (mgr.44103) 48 : cephadm [INF] Reconfiguring daemon mon.a on vm07 2026-03-09T14:39:23.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:23 vm07 bash[55244]: cephadm 2026-03-09T14:39:22.285605+0000 mgr.y (mgr.44103) 49 : cephadm [INF] Reconfiguring rgw.smpl.vm07.tkkeli (monmap changed)... 2026-03-09T14:39:23.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:23 vm07 bash[55244]: cephadm 2026-03-09T14:39:22.285605+0000 mgr.y (mgr.44103) 49 : cephadm [INF] Reconfiguring rgw.smpl.vm07.tkkeli (monmap changed)... 2026-03-09T14:39:23.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:23 vm07 bash[55244]: cephadm 2026-03-09T14:39:22.288396+0000 mgr.y (mgr.44103) 50 : cephadm [INF] Reconfiguring daemon rgw.smpl.vm07.tkkeli on vm07 2026-03-09T14:39:23.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:23 vm07 bash[55244]: cephadm 2026-03-09T14:39:22.288396+0000 mgr.y (mgr.44103) 50 : cephadm [INF] Reconfiguring daemon rgw.smpl.vm07.tkkeli on vm07 2026-03-09T14:39:23.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:23 vm07 bash[55244]: audit 2026-03-09T14:39:22.571296+0000 mon.a (mon.0) 145 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:39:23.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:23 vm07 bash[55244]: audit 2026-03-09T14:39:22.571296+0000 mon.a (mon.0) 145 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:39:23.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:23 vm07 bash[55244]: audit 2026-03-09T14:39:22.679952+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:23.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:23 vm07 bash[55244]: audit 2026-03-09T14:39:22.679952+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:23.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:23 vm07 bash[55244]: audit 2026-03-09T14:39:22.685435+0000 mon.a (mon.0) 147 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:23.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:23 vm07 bash[55244]: audit 
2026-03-09T14:39:22.685435+0000 mon.a (mon.0) 147 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:23.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:23 vm07 bash[55244]: audit 2026-03-09T14:39:22.686632+0000 mon.a (mon.0) 148 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T14:39:23.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:23 vm07 bash[55244]: audit 2026-03-09T14:39:22.686632+0000 mon.a (mon.0) 148 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T14:39:23.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:23 vm07 bash[55244]: audit 2026-03-09T14:39:22.687056+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:23.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:23 vm07 bash[55244]: audit 2026-03-09T14:39:22.687056+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:23.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:23 vm07 bash[55244]: audit 2026-03-09T14:39:23.063273+0000 mon.a (mon.0) 150 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:23.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:23 vm07 bash[55244]: audit 2026-03-09T14:39:23.063273+0000 mon.a (mon.0) 150 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:23.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:23 vm07 bash[55244]: audit 2026-03-09T14:39:23.068766+0000 mon.a (mon.0) 151 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:23.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:23 vm07 bash[55244]: audit 2026-03-09T14:39:23.068766+0000 mon.a (mon.0) 151 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:23.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:23 vm07 bash[55244]: audit 2026-03-09T14:39:23.069867+0000 mon.a (mon.0) 152 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T14:39:23.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:23 vm07 bash[55244]: audit 2026-03-09T14:39:23.069867+0000 mon.a (mon.0) 152 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T14:39:23.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:23 vm07 bash[55244]: audit 2026-03-09T14:39:23.070280+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T14:39:23.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:23 vm07 bash[55244]: audit 2026-03-09T14:39:23.070280+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T14:39:23.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:23 vm07 bash[55244]: audit 
2026-03-09T14:39:23.070702+0000 mon.a (mon.0) 154 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:23.657 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:23 vm07 bash[55244]: audit 2026-03-09T14:39:23.070702+0000 mon.a (mon.0) 154 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:23.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:23 vm11 bash[43577]: cephadm 2026-03-09T14:39:21.536306+0000 mgr.y (mgr.44103) 45 : cephadm [INF] Reconfiguring osd.0 (monmap changed)... 2026-03-09T14:39:23.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:23 vm11 bash[43577]: cephadm 2026-03-09T14:39:21.536306+0000 mgr.y (mgr.44103) 45 : cephadm [INF] Reconfiguring osd.0 (monmap changed)... 2026-03-09T14:39:23.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:23 vm11 bash[43577]: cephadm 2026-03-09T14:39:21.538619+0000 mgr.y (mgr.44103) 46 : cephadm [INF] Reconfiguring daemon osd.0 on vm07 2026-03-09T14:39:23.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:23 vm11 bash[43577]: cephadm 2026-03-09T14:39:21.538619+0000 mgr.y (mgr.44103) 46 : cephadm [INF] Reconfiguring daemon osd.0 on vm07 2026-03-09T14:39:23.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:23 vm11 bash[43577]: cephadm 2026-03-09T14:39:21.920072+0000 mgr.y (mgr.44103) 47 : cephadm [INF] Reconfiguring mon.a (monmap changed)... 2026-03-09T14:39:23.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:23 vm11 bash[43577]: cephadm 2026-03-09T14:39:21.920072+0000 mgr.y (mgr.44103) 47 : cephadm [INF] Reconfiguring mon.a (monmap changed)... 2026-03-09T14:39:23.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:23 vm11 bash[43577]: cephadm 2026-03-09T14:39:21.923196+0000 mgr.y (mgr.44103) 48 : cephadm [INF] Reconfiguring daemon mon.a on vm07 2026-03-09T14:39:23.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:23 vm11 bash[43577]: cephadm 2026-03-09T14:39:21.923196+0000 mgr.y (mgr.44103) 48 : cephadm [INF] Reconfiguring daemon mon.a on vm07 2026-03-09T14:39:23.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:23 vm11 bash[43577]: cephadm 2026-03-09T14:39:22.285605+0000 mgr.y (mgr.44103) 49 : cephadm [INF] Reconfiguring rgw.smpl.vm07.tkkeli (monmap changed)... 2026-03-09T14:39:23.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:23 vm11 bash[43577]: cephadm 2026-03-09T14:39:22.285605+0000 mgr.y (mgr.44103) 49 : cephadm [INF] Reconfiguring rgw.smpl.vm07.tkkeli (monmap changed)... 
2026-03-09T14:39:23.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:23 vm11 bash[43577]: cephadm 2026-03-09T14:39:22.288396+0000 mgr.y (mgr.44103) 50 : cephadm [INF] Reconfiguring daemon rgw.smpl.vm07.tkkeli on vm07 2026-03-09T14:39:23.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:23 vm11 bash[43577]: cephadm 2026-03-09T14:39:22.288396+0000 mgr.y (mgr.44103) 50 : cephadm [INF] Reconfiguring daemon rgw.smpl.vm07.tkkeli on vm07 2026-03-09T14:39:23.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:23 vm11 bash[43577]: audit 2026-03-09T14:39:22.571296+0000 mon.a (mon.0) 145 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:39:23.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:23 vm11 bash[43577]: audit 2026-03-09T14:39:22.571296+0000 mon.a (mon.0) 145 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:39:23.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:23 vm11 bash[43577]: audit 2026-03-09T14:39:22.679952+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:23.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:23 vm11 bash[43577]: audit 2026-03-09T14:39:22.679952+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:23.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:23 vm11 bash[43577]: audit 2026-03-09T14:39:22.685435+0000 mon.a (mon.0) 147 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:23.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:23 vm11 bash[43577]: audit 2026-03-09T14:39:22.685435+0000 mon.a (mon.0) 147 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:23.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:23 vm11 bash[43577]: audit 2026-03-09T14:39:22.686632+0000 mon.a (mon.0) 148 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T14:39:23.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:23 vm11 bash[43577]: audit 2026-03-09T14:39:22.686632+0000 mon.a (mon.0) 148 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T14:39:23.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:23 vm11 bash[43577]: audit 2026-03-09T14:39:22.687056+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:23.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:23 vm11 bash[43577]: audit 2026-03-09T14:39:22.687056+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:23.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:23 vm11 bash[43577]: audit 2026-03-09T14:39:23.063273+0000 mon.a (mon.0) 150 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:23.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:23 vm11 bash[43577]: audit 2026-03-09T14:39:23.063273+0000 mon.a (mon.0) 150 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 
2026-03-09T14:39:23.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:23 vm11 bash[43577]: audit 2026-03-09T14:39:23.068766+0000 mon.a (mon.0) 151 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:23.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:23 vm11 bash[43577]: audit 2026-03-09T14:39:23.068766+0000 mon.a (mon.0) 151 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:23.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:23 vm11 bash[43577]: audit 2026-03-09T14:39:23.069867+0000 mon.a (mon.0) 152 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T14:39:23.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:23 vm11 bash[43577]: audit 2026-03-09T14:39:23.069867+0000 mon.a (mon.0) 152 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T14:39:23.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:23 vm11 bash[43577]: audit 2026-03-09T14:39:23.070280+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T14:39:23.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:23 vm11 bash[43577]: audit 2026-03-09T14:39:23.070280+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T14:39:23.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:23 vm11 bash[43577]: audit 2026-03-09T14:39:23.070702+0000 mon.a (mon.0) 154 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:23.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:23 vm11 bash[43577]: audit 2026-03-09T14:39:23.070702+0000 mon.a (mon.0) 154 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:24.441 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:24 vm11 bash[43577]: cluster 2026-03-09T14:39:22.526308+0000 mgr.y (mgr.44103) 51 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:24.441 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:24 vm11 bash[43577]: cluster 2026-03-09T14:39:22.526308+0000 mgr.y (mgr.44103) 51 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:24.441 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:24 vm11 bash[43577]: cephadm 2026-03-09T14:39:22.686444+0000 mgr.y (mgr.44103) 52 : cephadm [INF] Reconfiguring osd.1 (monmap changed)... 2026-03-09T14:39:24.441 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:24 vm11 bash[43577]: cephadm 2026-03-09T14:39:22.686444+0000 mgr.y (mgr.44103) 52 : cephadm [INF] Reconfiguring osd.1 (monmap changed)... 
2026-03-09T14:39:24.441 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:24 vm11 bash[43577]: cephadm 2026-03-09T14:39:22.688508+0000 mgr.y (mgr.44103) 53 : cephadm [INF] Reconfiguring daemon osd.1 on vm07 2026-03-09T14:39:24.441 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:24 vm11 bash[43577]: cephadm 2026-03-09T14:39:22.688508+0000 mgr.y (mgr.44103) 53 : cephadm [INF] Reconfiguring daemon osd.1 on vm07 2026-03-09T14:39:24.441 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:24 vm11 bash[43577]: cephadm 2026-03-09T14:39:23.069665+0000 mgr.y (mgr.44103) 54 : cephadm [INF] Reconfiguring mgr.y (monmap changed)... 2026-03-09T14:39:24.441 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:24 vm11 bash[43577]: cephadm 2026-03-09T14:39:23.069665+0000 mgr.y (mgr.44103) 54 : cephadm [INF] Reconfiguring mgr.y (monmap changed)... 2026-03-09T14:39:24.441 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:24 vm11 bash[43577]: cephadm 2026-03-09T14:39:23.071136+0000 mgr.y (mgr.44103) 55 : cephadm [INF] Reconfiguring daemon mgr.y on vm07 2026-03-09T14:39:24.441 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:24 vm11 bash[43577]: cephadm 2026-03-09T14:39:23.071136+0000 mgr.y (mgr.44103) 55 : cephadm [INF] Reconfiguring daemon mgr.y on vm07 2026-03-09T14:39:24.441 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:24 vm11 bash[43577]: audit 2026-03-09T14:39:23.443623+0000 mon.a (mon.0) 155 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:24.441 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:24 vm11 bash[43577]: audit 2026-03-09T14:39:23.443623+0000 mon.a (mon.0) 155 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:24.441 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:24 vm11 bash[43577]: audit 2026-03-09T14:39:23.451690+0000 mon.a (mon.0) 156 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:24.441 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:24 vm11 bash[43577]: audit 2026-03-09T14:39:23.451690+0000 mon.a (mon.0) 156 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:24.441 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:24 vm11 bash[43577]: cephadm 2026-03-09T14:39:23.452462+0000 mgr.y (mgr.44103) 56 : cephadm [INF] Reconfiguring rgw.foo.vm07.urmgxb (monmap changed)... 2026-03-09T14:39:24.441 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:24 vm11 bash[43577]: cephadm 2026-03-09T14:39:23.452462+0000 mgr.y (mgr.44103) 56 : cephadm [INF] Reconfiguring rgw.foo.vm07.urmgxb (monmap changed)... 
2026-03-09T14:39:24.441 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:24 vm11 bash[43577]: audit 2026-03-09T14:39:23.452678+0000 mon.a (mon.0) 157 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.urmgxb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:39:24.441 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:24 vm11 bash[43577]: audit 2026-03-09T14:39:23.452678+0000 mon.a (mon.0) 157 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.urmgxb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:39:24.441 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:24 vm11 bash[43577]: audit 2026-03-09T14:39:23.453518+0000 mon.a (mon.0) 158 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:24.441 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:24 vm11 bash[43577]: audit 2026-03-09T14:39:23.453518+0000 mon.a (mon.0) 158 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:24.441 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:24 vm11 bash[43577]: cephadm 2026-03-09T14:39:23.454016+0000 mgr.y (mgr.44103) 57 : cephadm [INF] Reconfiguring daemon rgw.foo.vm07.urmgxb on vm07 2026-03-09T14:39:24.441 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:24 vm11 bash[43577]: cephadm 2026-03-09T14:39:23.454016+0000 mgr.y (mgr.44103) 57 : cephadm [INF] Reconfiguring daemon rgw.foo.vm07.urmgxb on vm07 2026-03-09T14:39:24.441 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:24 vm11 bash[43577]: audit 2026-03-09T14:39:23.818638+0000 mon.a (mon.0) 159 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:24.441 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:24 vm11 bash[43577]: audit 2026-03-09T14:39:23.818638+0000 mon.a (mon.0) 159 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:24.441 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:24 vm11 bash[43577]: audit 2026-03-09T14:39:23.823715+0000 mon.a (mon.0) 160 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:24.441 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:24 vm11 bash[43577]: audit 2026-03-09T14:39:23.823715+0000 mon.a (mon.0) 160 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:24.441 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:24 vm11 bash[43577]: audit 2026-03-09T14:39:23.824691+0000 mon.a (mon.0) 161 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T14:39:24.441 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:24 vm11 bash[43577]: audit 2026-03-09T14:39:23.824691+0000 mon.a (mon.0) 161 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T14:39:24.441 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:24 vm11 bash[43577]: audit 2026-03-09T14:39:23.825109+0000 mon.a (mon.0) 162 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-09T14:39:24.441 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:24 vm11 bash[43577]: audit 2026-03-09T14:39:23.825109+0000 mon.a (mon.0) 162 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:24.441 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:24 vm11 bash[43577]: audit 2026-03-09T14:39:24.230307+0000 mon.a (mon.0) 163 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:24.441 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:24 vm11 bash[43577]: audit 2026-03-09T14:39:24.230307+0000 mon.a (mon.0) 163 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:24.441 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:24 vm11 bash[43577]: audit 2026-03-09T14:39:24.236765+0000 mon.a (mon.0) 164 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:24.441 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:24 vm11 bash[43577]: audit 2026-03-09T14:39:24.236765+0000 mon.a (mon.0) 164 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:24.441 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:24 vm11 bash[43577]: audit 2026-03-09T14:39:24.237907+0000 mon.a (mon.0) 165 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T14:39:24.441 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:24 vm11 bash[43577]: audit 2026-03-09T14:39:24.237907+0000 mon.a (mon.0) 165 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T14:39:24.441 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:24 vm11 bash[43577]: audit 2026-03-09T14:39:24.238322+0000 mon.a (mon.0) 166 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:24.441 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:24 vm11 bash[43577]: audit 2026-03-09T14:39:24.238322+0000 mon.a (mon.0) 166 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:24.441 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:39:24 vm11 bash[41290]: ts=2026-03-09T14:39:24.146Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"192.168.123.111:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:39:24.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:24 vm07 bash[55244]: cluster 2026-03-09T14:39:22.526308+0000 mgr.y (mgr.44103) 51 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:24.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:24 vm07 bash[55244]: cluster 2026-03-09T14:39:22.526308+0000 mgr.y (mgr.44103) 51 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:24.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:24 vm07 bash[55244]: cephadm 2026-03-09T14:39:22.686444+0000 mgr.y (mgr.44103) 52 : cephadm [INF] Reconfiguring osd.1 (monmap changed)... 2026-03-09T14:39:24.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:24 vm07 bash[55244]: cephadm 2026-03-09T14:39:22.686444+0000 mgr.y (mgr.44103) 52 : cephadm [INF] Reconfiguring osd.1 (monmap changed)... 2026-03-09T14:39:24.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:24 vm07 bash[55244]: cephadm 2026-03-09T14:39:22.688508+0000 mgr.y (mgr.44103) 53 : cephadm [INF] Reconfiguring daemon osd.1 on vm07 2026-03-09T14:39:24.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:24 vm07 bash[55244]: cephadm 2026-03-09T14:39:22.688508+0000 mgr.y (mgr.44103) 53 : cephadm [INF] Reconfiguring daemon osd.1 on vm07 2026-03-09T14:39:24.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:24 vm07 bash[55244]: cephadm 2026-03-09T14:39:23.069665+0000 mgr.y (mgr.44103) 54 : cephadm [INF] Reconfiguring mgr.y (monmap changed)... 2026-03-09T14:39:24.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:24 vm07 bash[55244]: cephadm 2026-03-09T14:39:23.069665+0000 mgr.y (mgr.44103) 54 : cephadm [INF] Reconfiguring mgr.y (monmap changed)... 
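[editor's note] The CephOSDFlapping "Evaluating rule failed" warning above is a PromQL vector-matching failure: ceph_osd_metadata is present twice per OSD (once with instance="ceph_cluster" and once with instance="192.168.123.111:9283"), so the on (ceph_daemon) group_left (hostname) join has a non-unique right-hand side. The sketch below shows one way to confirm the duplicate series and to try a deduplicated variant of the expression against the Prometheus HTTP API; the endpoint (vm11:9095) and the max by (...) rewrite are illustrative assumptions, not the rule shipped with this build.

# Sketch only: probe the Prometheus instance for the duplicate
# ceph_osd_metadata series behind the CephOSDFlapping evaluation warning,
# and run a deduplicated variant of the alert expression.
# Assumption (not from the log): Prometheus is reachable at vm11:9095.
import requests

PROM = "http://vm11:9095/api/v1/query"  # assumed endpoint

def query(expr: str) -> list:
    """Run an instant query and return the result vector."""
    resp = requests.get(PROM, params={"query": expr}, timeout=10)
    resp.raise_for_status()
    return resp.json()["data"]["result"]

# 1. OSDs with more than one metadata series -- the match-group duplicates
#    reported in the rule-evaluation warning above.
for series in query("count by (ceph_daemon) (ceph_osd_metadata) > 1"):
    print("duplicate metadata for", series["metric"]["ceph_daemon"])

# 2. A deduplicated variant of the failing expression: collapse the
#    right-hand side to one series per (ceph_daemon, hostname) before the
#    group_left join.  Illustrative rewrite, not the shipped rule.
flapping = query(
    "(rate(ceph_osd_up[5m]) "
    "* on (ceph_daemon) group_left (hostname) "
    "max by (ceph_daemon, hostname) (ceph_osd_metadata)) * 60 > 1"
)
print("flapping OSDs:", [s["metric"].get("ceph_daemon") for s in flapping])

[end editor's note]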
2026-03-09T14:39:24.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:24 vm07 bash[55244]: cephadm 2026-03-09T14:39:23.071136+0000 mgr.y (mgr.44103) 55 : cephadm [INF] Reconfiguring daemon mgr.y on vm07 2026-03-09T14:39:24.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:24 vm07 bash[55244]: cephadm 2026-03-09T14:39:23.071136+0000 mgr.y (mgr.44103) 55 : cephadm [INF] Reconfiguring daemon mgr.y on vm07 2026-03-09T14:39:24.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:24 vm07 bash[55244]: audit 2026-03-09T14:39:23.443623+0000 mon.a (mon.0) 155 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:24.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:24 vm07 bash[55244]: audit 2026-03-09T14:39:23.443623+0000 mon.a (mon.0) 155 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:24.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:24 vm07 bash[55244]: audit 2026-03-09T14:39:23.451690+0000 mon.a (mon.0) 156 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:24.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:24 vm07 bash[55244]: audit 2026-03-09T14:39:23.451690+0000 mon.a (mon.0) 156 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:24.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:24 vm07 bash[55244]: cephadm 2026-03-09T14:39:23.452462+0000 mgr.y (mgr.44103) 56 : cephadm [INF] Reconfiguring rgw.foo.vm07.urmgxb (monmap changed)... 2026-03-09T14:39:24.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:24 vm07 bash[55244]: cephadm 2026-03-09T14:39:23.452462+0000 mgr.y (mgr.44103) 56 : cephadm [INF] Reconfiguring rgw.foo.vm07.urmgxb (monmap changed)... 
2026-03-09T14:39:24.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:24 vm07 bash[55244]: audit 2026-03-09T14:39:23.452678+0000 mon.a (mon.0) 157 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.urmgxb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:39:24.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:24 vm07 bash[55244]: audit 2026-03-09T14:39:23.452678+0000 mon.a (mon.0) 157 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.urmgxb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:39:24.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:24 vm07 bash[55244]: audit 2026-03-09T14:39:23.453518+0000 mon.a (mon.0) 158 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:24.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:24 vm07 bash[55244]: audit 2026-03-09T14:39:23.453518+0000 mon.a (mon.0) 158 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:24.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:24 vm07 bash[55244]: cephadm 2026-03-09T14:39:23.454016+0000 mgr.y (mgr.44103) 57 : cephadm [INF] Reconfiguring daemon rgw.foo.vm07.urmgxb on vm07 2026-03-09T14:39:24.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:24 vm07 bash[55244]: cephadm 2026-03-09T14:39:23.454016+0000 mgr.y (mgr.44103) 57 : cephadm [INF] Reconfiguring daemon rgw.foo.vm07.urmgxb on vm07 2026-03-09T14:39:24.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:24 vm07 bash[55244]: audit 2026-03-09T14:39:23.818638+0000 mon.a (mon.0) 159 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:24.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:24 vm07 bash[55244]: audit 2026-03-09T14:39:23.818638+0000 mon.a (mon.0) 159 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:24.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:24 vm07 bash[55244]: audit 2026-03-09T14:39:23.823715+0000 mon.a (mon.0) 160 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:24.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:24 vm07 bash[55244]: audit 2026-03-09T14:39:23.823715+0000 mon.a (mon.0) 160 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:24.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:24 vm07 bash[55244]: audit 2026-03-09T14:39:23.824691+0000 mon.a (mon.0) 161 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T14:39:24.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:24 vm07 bash[55244]: audit 2026-03-09T14:39:23.824691+0000 mon.a (mon.0) 161 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T14:39:24.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:24 vm07 bash[55244]: audit 2026-03-09T14:39:23.825109+0000 mon.a (mon.0) 162 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-09T14:39:24.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:24 vm07 bash[55244]: audit 2026-03-09T14:39:23.825109+0000 mon.a (mon.0) 162 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:24.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:24 vm07 bash[55244]: audit 2026-03-09T14:39:24.230307+0000 mon.a (mon.0) 163 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:24.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:24 vm07 bash[55244]: audit 2026-03-09T14:39:24.230307+0000 mon.a (mon.0) 163 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:24.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:24 vm07 bash[55244]: audit 2026-03-09T14:39:24.236765+0000 mon.a (mon.0) 164 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:24.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:24 vm07 bash[55244]: audit 2026-03-09T14:39:24.236765+0000 mon.a (mon.0) 164 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:24.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:24 vm07 bash[55244]: audit 2026-03-09T14:39:24.237907+0000 mon.a (mon.0) 165 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T14:39:24.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:24 vm07 bash[55244]: audit 2026-03-09T14:39:24.237907+0000 mon.a (mon.0) 165 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T14:39:24.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:24 vm07 bash[55244]: audit 2026-03-09T14:39:24.238322+0000 mon.a (mon.0) 166 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:24.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:24 vm07 bash[55244]: audit 2026-03-09T14:39:24.238322+0000 mon.a (mon.0) 166 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:24.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:24 vm07 bash[56315]: cluster 2026-03-09T14:39:22.526308+0000 mgr.y (mgr.44103) 51 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:24.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:24 vm07 bash[56315]: cluster 2026-03-09T14:39:22.526308+0000 mgr.y (mgr.44103) 51 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:24.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:24 vm07 bash[56315]: cephadm 2026-03-09T14:39:22.686444+0000 mgr.y (mgr.44103) 52 : cephadm [INF] Reconfiguring osd.1 (monmap changed)... 2026-03-09T14:39:24.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:24 vm07 bash[56315]: cephadm 2026-03-09T14:39:22.686444+0000 mgr.y (mgr.44103) 52 : cephadm [INF] Reconfiguring osd.1 (monmap changed)... 
2026-03-09T14:39:24.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:24 vm07 bash[56315]: cephadm 2026-03-09T14:39:22.688508+0000 mgr.y (mgr.44103) 53 : cephadm [INF] Reconfiguring daemon osd.1 on vm07 2026-03-09T14:39:24.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:24 vm07 bash[56315]: cephadm 2026-03-09T14:39:22.688508+0000 mgr.y (mgr.44103) 53 : cephadm [INF] Reconfiguring daemon osd.1 on vm07 2026-03-09T14:39:24.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:24 vm07 bash[56315]: cephadm 2026-03-09T14:39:23.069665+0000 mgr.y (mgr.44103) 54 : cephadm [INF] Reconfiguring mgr.y (monmap changed)... 2026-03-09T14:39:24.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:24 vm07 bash[56315]: cephadm 2026-03-09T14:39:23.069665+0000 mgr.y (mgr.44103) 54 : cephadm [INF] Reconfiguring mgr.y (monmap changed)... 2026-03-09T14:39:24.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:24 vm07 bash[56315]: cephadm 2026-03-09T14:39:23.071136+0000 mgr.y (mgr.44103) 55 : cephadm [INF] Reconfiguring daemon mgr.y on vm07 2026-03-09T14:39:24.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:24 vm07 bash[56315]: cephadm 2026-03-09T14:39:23.071136+0000 mgr.y (mgr.44103) 55 : cephadm [INF] Reconfiguring daemon mgr.y on vm07 2026-03-09T14:39:24.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:24 vm07 bash[56315]: audit 2026-03-09T14:39:23.443623+0000 mon.a (mon.0) 155 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:24.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:24 vm07 bash[56315]: audit 2026-03-09T14:39:23.443623+0000 mon.a (mon.0) 155 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:24.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:24 vm07 bash[56315]: audit 2026-03-09T14:39:23.451690+0000 mon.a (mon.0) 156 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:24.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:24 vm07 bash[56315]: audit 2026-03-09T14:39:23.451690+0000 mon.a (mon.0) 156 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:24.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:24 vm07 bash[56315]: cephadm 2026-03-09T14:39:23.452462+0000 mgr.y (mgr.44103) 56 : cephadm [INF] Reconfiguring rgw.foo.vm07.urmgxb (monmap changed)... 2026-03-09T14:39:24.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:24 vm07 bash[56315]: cephadm 2026-03-09T14:39:23.452462+0000 mgr.y (mgr.44103) 56 : cephadm [INF] Reconfiguring rgw.foo.vm07.urmgxb (monmap changed)... 
2026-03-09T14:39:24.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:24 vm07 bash[56315]: audit 2026-03-09T14:39:23.452678+0000 mon.a (mon.0) 157 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.urmgxb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:39:24.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:24 vm07 bash[56315]: audit 2026-03-09T14:39:23.452678+0000 mon.a (mon.0) 157 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.urmgxb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:39:24.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:24 vm07 bash[56315]: audit 2026-03-09T14:39:23.453518+0000 mon.a (mon.0) 158 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:24.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:24 vm07 bash[56315]: audit 2026-03-09T14:39:23.453518+0000 mon.a (mon.0) 158 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:24.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:24 vm07 bash[56315]: cephadm 2026-03-09T14:39:23.454016+0000 mgr.y (mgr.44103) 57 : cephadm [INF] Reconfiguring daemon rgw.foo.vm07.urmgxb on vm07 2026-03-09T14:39:24.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:24 vm07 bash[56315]: cephadm 2026-03-09T14:39:23.454016+0000 mgr.y (mgr.44103) 57 : cephadm [INF] Reconfiguring daemon rgw.foo.vm07.urmgxb on vm07 2026-03-09T14:39:24.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:24 vm07 bash[56315]: audit 2026-03-09T14:39:23.818638+0000 mon.a (mon.0) 159 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:24.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:24 vm07 bash[56315]: audit 2026-03-09T14:39:23.818638+0000 mon.a (mon.0) 159 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:24.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:24 vm07 bash[56315]: audit 2026-03-09T14:39:23.823715+0000 mon.a (mon.0) 160 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:24.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:24 vm07 bash[56315]: audit 2026-03-09T14:39:23.823715+0000 mon.a (mon.0) 160 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:24.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:24 vm07 bash[56315]: audit 2026-03-09T14:39:23.824691+0000 mon.a (mon.0) 161 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T14:39:24.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:24 vm07 bash[56315]: audit 2026-03-09T14:39:23.824691+0000 mon.a (mon.0) 161 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T14:39:24.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:24 vm07 bash[56315]: audit 2026-03-09T14:39:23.825109+0000 mon.a (mon.0) 162 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-09T14:39:24.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:24 vm07 bash[56315]: audit 2026-03-09T14:39:23.825109+0000 mon.a (mon.0) 162 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:24.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:24 vm07 bash[56315]: audit 2026-03-09T14:39:24.230307+0000 mon.a (mon.0) 163 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:24.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:24 vm07 bash[56315]: audit 2026-03-09T14:39:24.230307+0000 mon.a (mon.0) 163 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:24.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:24 vm07 bash[56315]: audit 2026-03-09T14:39:24.236765+0000 mon.a (mon.0) 164 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:24.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:24 vm07 bash[56315]: audit 2026-03-09T14:39:24.236765+0000 mon.a (mon.0) 164 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:24.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:24 vm07 bash[56315]: audit 2026-03-09T14:39:24.237907+0000 mon.a (mon.0) 165 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T14:39:24.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:24 vm07 bash[56315]: audit 2026-03-09T14:39:24.237907+0000 mon.a (mon.0) 165 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T14:39:24.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:24 vm07 bash[56315]: audit 2026-03-09T14:39:24.238322+0000 mon.a (mon.0) 166 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:24.908 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:24 vm07 bash[56315]: audit 2026-03-09T14:39:24.238322+0000 mon.a (mon.0) 166 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:25.711 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:25 vm11 bash[43577]: cephadm 2026-03-09T14:39:23.824484+0000 mgr.y (mgr.44103) 58 : cephadm [INF] Reconfiguring osd.4 (monmap changed)... 2026-03-09T14:39:25.711 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:25 vm11 bash[43577]: cephadm 2026-03-09T14:39:23.824484+0000 mgr.y (mgr.44103) 58 : cephadm [INF] Reconfiguring osd.4 (monmap changed)... 2026-03-09T14:39:25.711 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:25 vm11 bash[43577]: cephadm 2026-03-09T14:39:23.826217+0000 mgr.y (mgr.44103) 59 : cephadm [INF] Reconfiguring daemon osd.4 on vm11 2026-03-09T14:39:25.711 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:25 vm11 bash[43577]: cephadm 2026-03-09T14:39:23.826217+0000 mgr.y (mgr.44103) 59 : cephadm [INF] Reconfiguring daemon osd.4 on vm11 2026-03-09T14:39:25.711 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:25 vm11 bash[43577]: cephadm 2026-03-09T14:39:24.237715+0000 mgr.y (mgr.44103) 60 : cephadm [INF] Reconfiguring osd.5 (monmap changed)... 
2026-03-09T14:39:25.711 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:25 vm11 bash[43577]: cephadm 2026-03-09T14:39:24.237715+0000 mgr.y (mgr.44103) 60 : cephadm [INF] Reconfiguring osd.5 (monmap changed)... 2026-03-09T14:39:25.711 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:25 vm11 bash[43577]: cephadm 2026-03-09T14:39:24.239925+0000 mgr.y (mgr.44103) 61 : cephadm [INF] Reconfiguring daemon osd.5 on vm11 2026-03-09T14:39:25.711 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:25 vm11 bash[43577]: cephadm 2026-03-09T14:39:24.239925+0000 mgr.y (mgr.44103) 61 : cephadm [INF] Reconfiguring daemon osd.5 on vm11 2026-03-09T14:39:25.711 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:25 vm11 bash[43577]: audit 2026-03-09T14:39:24.640694+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:25.711 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:25 vm11 bash[43577]: audit 2026-03-09T14:39:24.640694+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:25.711 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:25 vm11 bash[43577]: audit 2026-03-09T14:39:24.645475+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:25.711 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:25 vm11 bash[43577]: audit 2026-03-09T14:39:24.645475+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:25.711 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:25 vm11 bash[43577]: audit 2026-03-09T14:39:24.646591+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T14:39:25.711 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:25 vm11 bash[43577]: audit 2026-03-09T14:39:24.646591+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T14:39:25.711 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:25 vm11 bash[43577]: audit 2026-03-09T14:39:24.647058+0000 mon.a (mon.0) 170 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T14:39:25.711 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:25 vm11 bash[43577]: audit 2026-03-09T14:39:24.647058+0000 mon.a (mon.0) 170 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T14:39:25.711 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:25 vm11 bash[43577]: audit 2026-03-09T14:39:24.647377+0000 mon.a (mon.0) 171 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:25.711 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:25 vm11 bash[43577]: audit 2026-03-09T14:39:24.647377+0000 mon.a (mon.0) 171 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:25.711 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:25 vm11 bash[43577]: audit 2026-03-09T14:39:25.006688+0000 mon.a (mon.0) 172 : audit [INF] 
from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:25.711 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:25 vm11 bash[43577]: audit 2026-03-09T14:39:25.006688+0000 mon.a (mon.0) 172 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:25.711 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:25 vm11 bash[43577]: audit 2026-03-09T14:39:25.013157+0000 mon.a (mon.0) 173 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:25.711 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:25 vm11 bash[43577]: audit 2026-03-09T14:39:25.013157+0000 mon.a (mon.0) 173 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:25.711 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:25 vm11 bash[43577]: audit 2026-03-09T14:39:25.014356+0000 mon.a (mon.0) 174 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T14:39:25.711 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:25 vm11 bash[43577]: audit 2026-03-09T14:39:25.014356+0000 mon.a (mon.0) 174 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T14:39:25.711 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:25 vm11 bash[43577]: audit 2026-03-09T14:39:25.014740+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:25.711 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:25 vm11 bash[43577]: audit 2026-03-09T14:39:25.014740+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:25.885 INFO:teuthology.orchestra.run.vm07.stdout:true 2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:25 vm07 bash[55244]: cephadm 2026-03-09T14:39:23.824484+0000 mgr.y (mgr.44103) 58 : cephadm [INF] Reconfiguring osd.4 (monmap changed)... 2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:25 vm07 bash[55244]: cephadm 2026-03-09T14:39:23.824484+0000 mgr.y (mgr.44103) 58 : cephadm [INF] Reconfiguring osd.4 (monmap changed)... 2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:25 vm07 bash[55244]: cephadm 2026-03-09T14:39:23.826217+0000 mgr.y (mgr.44103) 59 : cephadm [INF] Reconfiguring daemon osd.4 on vm11 2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:25 vm07 bash[55244]: cephadm 2026-03-09T14:39:23.826217+0000 mgr.y (mgr.44103) 59 : cephadm [INF] Reconfiguring daemon osd.4 on vm11 2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:25 vm07 bash[55244]: cephadm 2026-03-09T14:39:24.237715+0000 mgr.y (mgr.44103) 60 : cephadm [INF] Reconfiguring osd.5 (monmap changed)... 2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:25 vm07 bash[55244]: cephadm 2026-03-09T14:39:24.237715+0000 mgr.y (mgr.44103) 60 : cephadm [INF] Reconfiguring osd.5 (monmap changed)... 
2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:25 vm07 bash[55244]: cephadm 2026-03-09T14:39:24.239925+0000 mgr.y (mgr.44103) 61 : cephadm [INF] Reconfiguring daemon osd.5 on vm11 2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:25 vm07 bash[55244]: cephadm 2026-03-09T14:39:24.239925+0000 mgr.y (mgr.44103) 61 : cephadm [INF] Reconfiguring daemon osd.5 on vm11 2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:25 vm07 bash[55244]: audit 2026-03-09T14:39:24.640694+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:25 vm07 bash[55244]: audit 2026-03-09T14:39:24.640694+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:25 vm07 bash[55244]: audit 2026-03-09T14:39:24.645475+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:25 vm07 bash[55244]: audit 2026-03-09T14:39:24.645475+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:25 vm07 bash[55244]: audit 2026-03-09T14:39:24.646591+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:25 vm07 bash[55244]: audit 2026-03-09T14:39:24.646591+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:25 vm07 bash[55244]: audit 2026-03-09T14:39:24.647058+0000 mon.a (mon.0) 170 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:25 vm07 bash[55244]: audit 2026-03-09T14:39:24.647058+0000 mon.a (mon.0) 170 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:25 vm07 bash[55244]: audit 2026-03-09T14:39:24.647377+0000 mon.a (mon.0) 171 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:25 vm07 bash[55244]: audit 2026-03-09T14:39:24.647377+0000 mon.a (mon.0) 171 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:25 vm07 bash[55244]: audit 2026-03-09T14:39:25.006688+0000 mon.a (mon.0) 172 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:25 vm07 bash[55244]: audit 2026-03-09T14:39:25.006688+0000 mon.a (mon.0) 172 : audit 
[INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:25 vm07 bash[55244]: audit 2026-03-09T14:39:25.013157+0000 mon.a (mon.0) 173 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:25 vm07 bash[55244]: audit 2026-03-09T14:39:25.013157+0000 mon.a (mon.0) 173 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:25 vm07 bash[55244]: audit 2026-03-09T14:39:25.014356+0000 mon.a (mon.0) 174 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:25 vm07 bash[55244]: audit 2026-03-09T14:39:25.014356+0000 mon.a (mon.0) 174 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:25 vm07 bash[55244]: audit 2026-03-09T14:39:25.014740+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:25 vm07 bash[55244]: audit 2026-03-09T14:39:25.014740+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:25 vm07 bash[56315]: cephadm 2026-03-09T14:39:23.824484+0000 mgr.y (mgr.44103) 58 : cephadm [INF] Reconfiguring osd.4 (monmap changed)... 2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:25 vm07 bash[56315]: cephadm 2026-03-09T14:39:23.824484+0000 mgr.y (mgr.44103) 58 : cephadm [INF] Reconfiguring osd.4 (monmap changed)... 2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:25 vm07 bash[56315]: cephadm 2026-03-09T14:39:23.826217+0000 mgr.y (mgr.44103) 59 : cephadm [INF] Reconfiguring daemon osd.4 on vm11 2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:25 vm07 bash[56315]: cephadm 2026-03-09T14:39:23.826217+0000 mgr.y (mgr.44103) 59 : cephadm [INF] Reconfiguring daemon osd.4 on vm11 2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:25 vm07 bash[56315]: cephadm 2026-03-09T14:39:24.237715+0000 mgr.y (mgr.44103) 60 : cephadm [INF] Reconfiguring osd.5 (monmap changed)... 2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:25 vm07 bash[56315]: cephadm 2026-03-09T14:39:24.237715+0000 mgr.y (mgr.44103) 60 : cephadm [INF] Reconfiguring osd.5 (monmap changed)... 
2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:25 vm07 bash[56315]: cephadm 2026-03-09T14:39:24.239925+0000 mgr.y (mgr.44103) 61 : cephadm [INF] Reconfiguring daemon osd.5 on vm11 2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:25 vm07 bash[56315]: cephadm 2026-03-09T14:39:24.239925+0000 mgr.y (mgr.44103) 61 : cephadm [INF] Reconfiguring daemon osd.5 on vm11 2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:25 vm07 bash[56315]: audit 2026-03-09T14:39:24.640694+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:25 vm07 bash[56315]: audit 2026-03-09T14:39:24.640694+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:25 vm07 bash[56315]: audit 2026-03-09T14:39:24.645475+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:25 vm07 bash[56315]: audit 2026-03-09T14:39:24.645475+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:26.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:25 vm07 bash[56315]: audit 2026-03-09T14:39:24.646591+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T14:39:26.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:25 vm07 bash[56315]: audit 2026-03-09T14:39:24.646591+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T14:39:26.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:25 vm07 bash[56315]: audit 2026-03-09T14:39:24.647058+0000 mon.a (mon.0) 170 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T14:39:26.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:25 vm07 bash[56315]: audit 2026-03-09T14:39:24.647058+0000 mon.a (mon.0) 170 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T14:39:26.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:25 vm07 bash[56315]: audit 2026-03-09T14:39:24.647377+0000 mon.a (mon.0) 171 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:26.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:25 vm07 bash[56315]: audit 2026-03-09T14:39:24.647377+0000 mon.a (mon.0) 171 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:26.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:25 vm07 bash[56315]: audit 2026-03-09T14:39:25.006688+0000 mon.a (mon.0) 172 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:26.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:25 vm07 bash[56315]: audit 2026-03-09T14:39:25.006688+0000 mon.a (mon.0) 172 : audit 
[INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y'
2026-03-09T14:39:26.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:25 vm07 bash[56315]: audit 2026-03-09T14:39:25.013157+0000 mon.a (mon.0) 173 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y'
2026-03-09T14:39:26.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:25 vm07 bash[56315]: audit 2026-03-09T14:39:25.013157+0000 mon.a (mon.0) 173 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y'
2026-03-09T14:39:26.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:25 vm07 bash[56315]: audit 2026-03-09T14:39:25.014356+0000 mon.a (mon.0) 174 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-09T14:39:26.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:25 vm07 bash[56315]: audit 2026-03-09T14:39:25.014356+0000 mon.a (mon.0) 174 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-09T14:39:26.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:25 vm07 bash[56315]: audit 2026-03-09T14:39:25.014740+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:39:26.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:25 vm07 bash[56315]: audit 2026-03-09T14:39:25.014740+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:39:26.283 INFO:teuthology.orchestra.run.vm07.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-09T14:39:26.283 INFO:teuthology.orchestra.run.vm07.stdout:alertmanager.a vm07 *:9093,9094 running (107s) 27s ago 6m 14.3M - 0.25.0 c8568f914cd2 7b5214f8e385
2026-03-09T14:39:26.283 INFO:teuthology.orchestra.run.vm07.stdout:grafana.a vm11 *:3000 running (105s) 13s ago 6m 37.3M - dad864ee21e9 614f6a00be7a
2026-03-09T14:39:26.283 INFO:teuthology.orchestra.run.vm07.stdout:iscsi.foo.vm07.ohlmos vm07 running (69s) 27s ago 6m 42.6M - 3.5 e1d6a67b021e e3b30dab288c
2026-03-09T14:39:26.283 INFO:teuthology.orchestra.run.vm07.stdout:mgr.x vm11 *:8443,9283,8765 running (66s) 13s ago 9m 464M - 19.2.3-678-ge911bdeb 654f31e6858e d35dddd392d1
2026-03-09T14:39:26.283 INFO:teuthology.orchestra.run.vm07.stdout:mgr.y vm07 *:8443,9283,8765 running (96s) 27s ago 10m 509M - 19.2.3-678-ge911bdeb 654f31e6858e bdbac6dff330
2026-03-09T14:39:26.283 INFO:teuthology.orchestra.run.vm07.stdout:mon.a vm07 running (38s) 27s ago 10m 30.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e bcdaa5dfc948
2026-03-09T14:39:26.283 INFO:teuthology.orchestra.run.vm07.stdout:mon.b vm11 running (18s) 13s ago 9m 19.1M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1caba9bf8a13
2026-03-09T14:39:26.283 INFO:teuthology.orchestra.run.vm07.stdout:mon.c vm07 running (52s) 27s ago 9m 35.3M 2048M 19.2.3-678-ge911bdeb 654f31e6858e ff7dfe3a6c7c
2026-03-09T14:39:26.283 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.a vm07 *:9100 running (103s) 27s ago 6m 7067k - 1.7.0 72c9c2088986 16d64a9c3aa7
2026-03-09T14:39:26.283 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.b vm11 *:9100 running (102s) 13s ago 6m 7231k - 1.7.0 72c9c2088986 8e368c535897
2026-03-09T14:39:26.283 INFO:teuthology.orchestra.run.vm07.stdout:osd.0 vm07 running (9m) 27s ago 9m 51.0M 4096M 17.2.0 e1d6a67b021e 7a4a11fbf70d
2026-03-09T14:39:26.283 INFO:teuthology.orchestra.run.vm07.stdout:osd.1 vm07 running (8m) 27s ago 8m 52.9M 4096M 17.2.0 e1d6a67b021e 15e2e23b506b
2026-03-09T14:39:26.283 INFO:teuthology.orchestra.run.vm07.stdout:osd.2 vm07 running (8m) 27s ago 8m 48.4M 4096M 17.2.0 e1d6a67b021e fe41cd2240dc
2026-03-09T14:39:26.283 INFO:teuthology.orchestra.run.vm07.stdout:osd.3 vm07 running (8m) 27s ago 8m 50.7M 4096M 17.2.0 e1d6a67b021e b07b01a0b5aa
2026-03-09T14:39:26.283 INFO:teuthology.orchestra.run.vm07.stdout:osd.4 vm11 running (8m) 13s ago 8m 51.4M 4096M 17.2.0 e1d6a67b021e 172516d931e5
2026-03-09T14:39:26.283 INFO:teuthology.orchestra.run.vm07.stdout:osd.5 vm11 running (7m) 13s ago 7m 49.0M 4096M 17.2.0 e1d6a67b021e d7defb26b5d1
2026-03-09T14:39:26.283 INFO:teuthology.orchestra.run.vm07.stdout:osd.6 vm11 running (7m) 13s ago 7m 49.2M 4096M 17.2.0 e1d6a67b021e 52e28e90b585
2026-03-09T14:39:26.283 INFO:teuthology.orchestra.run.vm07.stdout:osd.7 vm11 running (7m) 13s ago 7m 49.3M 4096M 17.2.0 e1d6a67b021e abb74346bf4d
2026-03-09T14:39:26.283 INFO:teuthology.orchestra.run.vm07.stdout:prometheus.a vm11 *:9095 running (68s) 13s ago 6m 43.2M - 2.51.0 1d3b7f56885b e88f0339687c
2026-03-09T14:39:26.283 INFO:teuthology.orchestra.run.vm07.stdout:rgw.foo.vm07.urmgxb vm07 *:8000 running (6m) 27s ago 6m 85.5M - 17.2.0 e1d6a67b021e 765128ae03a3
2026-03-09T14:39:26.283 INFO:teuthology.orchestra.run.vm07.stdout:rgw.foo.vm11.ncyump vm11 *:8000 running (6m) 13s ago 6m 84.7M - 17.2.0 e1d6a67b021e 33917711cfd6
2026-03-09T14:39:26.283 INFO:teuthology.orchestra.run.vm07.stdout:rgw.smpl.vm07.tkkeli vm07 *:80 running (6m) 27s ago 6m 84.7M - 17.2.0 e1d6a67b021e 377fed84fff0
2026-03-09T14:39:26.283 INFO:teuthology.orchestra.run.vm07.stdout:rgw.smpl.vm11.ocxkef vm11 *:80 running (6m) 13s ago 6m 84.8M - 17.2.0 e1d6a67b021e 90ec06d07cd4
2026-03-09T14:39:26.534 INFO:teuthology.orchestra.run.vm07.stdout:{
2026-03-09T14:39:26.534 INFO:teuthology.orchestra.run.vm07.stdout: "mon": {
2026-03-09T14:39:26.534 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3
2026-03-09T14:39:26.534 INFO:teuthology.orchestra.run.vm07.stdout: },
2026-03-09T14:39:26.534 INFO:teuthology.orchestra.run.vm07.stdout: "mgr": {
2026-03-09T14:39:26.534 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-09T14:39:26.534 INFO:teuthology.orchestra.run.vm07.stdout: },
2026-03-09T14:39:26.534 INFO:teuthology.orchestra.run.vm07.stdout: "osd": {
2026-03-09T14:39:26.534 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8
2026-03-09T14:39:26.535 INFO:teuthology.orchestra.run.vm07.stdout: },
2026-03-09T14:39:26.535 INFO:teuthology.orchestra.run.vm07.stdout: "rgw": {
2026-03-09T14:39:26.535 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 4
2026-03-09T14:39:26.535 INFO:teuthology.orchestra.run.vm07.stdout: },
2026-03-09T14:39:26.535 INFO:teuthology.orchestra.run.vm07.stdout: "overall": {
2026-03-09T14:39:26.535 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 12,
2026-03-09T14:39:26.535 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 5
2026-03-09T14:39:26.535 INFO:teuthology.orchestra.run.vm07.stdout: }
2026-03-09T14:39:26.535 INFO:teuthology.orchestra.run.vm07.stdout:}
2026-03-09T14:39:26.711 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: cluster 2026-03-09T14:39:24.526815+0000 mgr.y (mgr.44103) 62 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:39:26.736 INFO:teuthology.orchestra.run.vm07.stdout:{
2026-03-09T14:39:26.736 INFO:teuthology.orchestra.run.vm07.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df",
2026-03-09T14:39:26.737 INFO:teuthology.orchestra.run.vm07.stdout: "in_progress": true,
2026-03-09T14:39:26.737 INFO:teuthology.orchestra.run.vm07.stdout: "which": "Upgrading all daemon types on all hosts",
2026-03-09T14:39:26.737 INFO:teuthology.orchestra.run.vm07.stdout: "services_complete": [
2026-03-09T14:39:26.737 INFO:teuthology.orchestra.run.vm07.stdout: "mgr",
2026-03-09T14:39:26.737 INFO:teuthology.orchestra.run.vm07.stdout: "mon"
2026-03-09T14:39:26.737 INFO:teuthology.orchestra.run.vm07.stdout: ],
2026-03-09T14:39:26.737 INFO:teuthology.orchestra.run.vm07.stdout: "progress": "5/23 daemons upgraded",
2026-03-09T14:39:26.737 INFO:teuthology.orchestra.run.vm07.stdout: "message": "Currently upgrading mon daemons",
2026-03-09T14:39:26.737 INFO:teuthology.orchestra.run.vm07.stdout: "is_paused": false
2026-03-09T14:39:26.737 INFO:teuthology.orchestra.run.vm07.stdout:}
2026-03-09T14:39:26.992 INFO:teuthology.orchestra.run.vm07.stdout:HEALTH_OK
2026-03-09T14:39:27.003 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:39:26 vm11 bash[41290]: ts=2026-03-09T14:39:26.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: cluster 2026-03-09T14:39:24.526815+0000 mgr.y (mgr.44103) 62 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: cephadm 2026-03-09T14:39:24.646304+0000 mgr.y (mgr.44103)
63 : cephadm [INF] Reconfiguring mgr.x (monmap changed)... 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: cephadm 2026-03-09T14:39:24.646304+0000 mgr.y (mgr.44103) 63 : cephadm [INF] Reconfiguring mgr.x (monmap changed)... 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: cephadm 2026-03-09T14:39:24.647810+0000 mgr.y (mgr.44103) 64 : cephadm [INF] Reconfiguring daemon mgr.x on vm11 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: cephadm 2026-03-09T14:39:24.647810+0000 mgr.y (mgr.44103) 64 : cephadm [INF] Reconfiguring daemon mgr.x on vm11 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: cephadm 2026-03-09T14:39:25.013935+0000 mgr.y (mgr.44103) 65 : cephadm [INF] Reconfiguring osd.6 (monmap changed)... 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: cephadm 2026-03-09T14:39:25.013935+0000 mgr.y (mgr.44103) 65 : cephadm [INF] Reconfiguring osd.6 (monmap changed)... 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: cephadm 2026-03-09T14:39:25.015767+0000 mgr.y (mgr.44103) 66 : cephadm [INF] Reconfiguring daemon osd.6 on vm11 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: cephadm 2026-03-09T14:39:25.015767+0000 mgr.y (mgr.44103) 66 : cephadm [INF] Reconfiguring daemon osd.6 on vm11 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: audit 2026-03-09T14:39:25.734169+0000 mon.a (mon.0) 176 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: audit 2026-03-09T14:39:25.734169+0000 mon.a (mon.0) 176 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: audit 2026-03-09T14:39:25.741063+0000 mon.a (mon.0) 177 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: audit 2026-03-09T14:39:25.741063+0000 mon.a (mon.0) 177 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: cephadm 2026-03-09T14:39:25.742018+0000 mgr.y (mgr.44103) 67 : cephadm [INF] Reconfiguring rgw.foo.vm11.ncyump (monmap changed)... 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: cephadm 2026-03-09T14:39:25.742018+0000 mgr.y (mgr.44103) 67 : cephadm [INF] Reconfiguring rgw.foo.vm11.ncyump (monmap changed)... 
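The orch upgrade status and ceph versions output captured above are plain JSON, so the same progress checks an operator would make by eye can be scripted. A minimal sketch, assuming only that jq is available on the admin host; the 30-second interval and the exact checks are illustrative and are not the suite's own wait task:

    # Poll the orchestrator until it reports the upgrade is no longer running,
    # then verify that a single ceph version remains across all daemons.
    while [ "$(ceph orch upgrade status | jq -r '.in_progress')" = "true" ]; do
        sleep 30
    done
    ceph versions | jq -e '.overall | length == 1'

At the point logged above such a loop would still be waiting: in_progress is true, services_complete lists only mgr and mon, and the overall section of ceph versions still shows both 17.2.0 and 19.2.3-678-ge911bdeb.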
2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: audit 2026-03-09T14:39:25.742603+0000 mon.a (mon.0) 178 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm11.ncyump", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: audit 2026-03-09T14:39:25.742603+0000 mon.a (mon.0) 178 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm11.ncyump", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: audit 2026-03-09T14:39:25.744705+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: audit 2026-03-09T14:39:25.744705+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: cephadm 2026-03-09T14:39:25.745304+0000 mgr.y (mgr.44103) 68 : cephadm [INF] Reconfiguring daemon rgw.foo.vm11.ncyump on vm11 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: cephadm 2026-03-09T14:39:25.745304+0000 mgr.y (mgr.44103) 68 : cephadm [INF] Reconfiguring daemon rgw.foo.vm11.ncyump on vm11 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: audit 2026-03-09T14:39:25.880958+0000 mgr.y (mgr.44103) 69 : audit [DBG] from='client.34138 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: audit 2026-03-09T14:39:25.880958+0000 mgr.y (mgr.44103) 69 : audit [DBG] from='client.34138 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: audit 2026-03-09T14:39:26.083929+0000 mgr.y (mgr.44103) 70 : audit [DBG] from='client.34141 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: audit 2026-03-09T14:39:26.083929+0000 mgr.y (mgr.44103) 70 : audit [DBG] from='client.34141 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: audit 2026-03-09T14:39:26.108258+0000 mon.a (mon.0) 180 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: audit 2026-03-09T14:39:26.108258+0000 mon.a (mon.0) 180 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: audit 
2026-03-09T14:39:26.112370+0000 mon.a (mon.0) 181 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: audit 2026-03-09T14:39:26.112370+0000 mon.a (mon.0) 181 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: cephadm 2026-03-09T14:39:26.113290+0000 mgr.y (mgr.44103) 71 : cephadm [INF] Reconfiguring rgw.smpl.vm11.ocxkef (monmap changed)... 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: cephadm 2026-03-09T14:39:26.113290+0000 mgr.y (mgr.44103) 71 : cephadm [INF] Reconfiguring rgw.smpl.vm11.ocxkef (monmap changed)... 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: audit 2026-03-09T14:39:26.113512+0000 mon.a (mon.0) 182 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm11.ocxkef", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: audit 2026-03-09T14:39:26.113512+0000 mon.a (mon.0) 182 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm11.ocxkef", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: audit 2026-03-09T14:39:26.114505+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: audit 2026-03-09T14:39:26.114505+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: cephadm 2026-03-09T14:39:26.114965+0000 mgr.y (mgr.44103) 72 : cephadm [INF] Reconfiguring daemon rgw.smpl.vm11.ocxkef on vm11 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: cephadm 2026-03-09T14:39:26.114965+0000 mgr.y (mgr.44103) 72 : cephadm [INF] Reconfiguring daemon rgw.smpl.vm11.ocxkef on vm11 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: audit 2026-03-09T14:39:26.286341+0000 mgr.y (mgr.44103) 73 : audit [DBG] from='client.34147 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: audit 2026-03-09T14:39:26.286341+0000 mgr.y (mgr.44103) 73 : audit [DBG] from='client.34147 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: audit 2026-03-09T14:39:26.503680+0000 mon.a (mon.0) 184 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: audit 2026-03-09T14:39:26.503680+0000 mon.a (mon.0) 
184 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: audit 2026-03-09T14:39:26.509053+0000 mon.a (mon.0) 185 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: audit 2026-03-09T14:39:26.509053+0000 mon.a (mon.0) 185 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: audit 2026-03-09T14:39:26.510431+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: audit 2026-03-09T14:39:26.510431+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: audit 2026-03-09T14:39:26.510975+0000 mon.a (mon.0) 187 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: audit 2026-03-09T14:39:26.510975+0000 mon.a (mon.0) 187 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: audit 2026-03-09T14:39:26.511439+0000 mon.a (mon.0) 188 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: audit 2026-03-09T14:39:26.511439+0000 mon.a (mon.0) 188 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: audit 2026-03-09T14:39:26.541867+0000 mon.a (mon.0) 189 : audit [DBG] from='client.? 192.168.123.107:0/2665102640' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:27.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:26 vm11 bash[43577]: audit 2026-03-09T14:39:26.541867+0000 mon.a (mon.0) 189 : audit [DBG] from='client.? 
192.168.123.107:0/2665102640' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:27.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: cluster 2026-03-09T14:39:24.526815+0000 mgr.y (mgr.44103) 62 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:39:27.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: cluster 2026-03-09T14:39:24.526815+0000 mgr.y (mgr.44103) 62 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:39:27.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: cephadm 2026-03-09T14:39:24.646304+0000 mgr.y (mgr.44103) 63 : cephadm [INF] Reconfiguring mgr.x (monmap changed)... 2026-03-09T14:39:27.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: cephadm 2026-03-09T14:39:24.646304+0000 mgr.y (mgr.44103) 63 : cephadm [INF] Reconfiguring mgr.x (monmap changed)... 2026-03-09T14:39:27.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: cephadm 2026-03-09T14:39:24.647810+0000 mgr.y (mgr.44103) 64 : cephadm [INF] Reconfiguring daemon mgr.x on vm11 2026-03-09T14:39:27.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: cephadm 2026-03-09T14:39:24.647810+0000 mgr.y (mgr.44103) 64 : cephadm [INF] Reconfiguring daemon mgr.x on vm11 2026-03-09T14:39:27.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: cephadm 2026-03-09T14:39:25.013935+0000 mgr.y (mgr.44103) 65 : cephadm [INF] Reconfiguring osd.6 (monmap changed)... 2026-03-09T14:39:27.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: cephadm 2026-03-09T14:39:25.013935+0000 mgr.y (mgr.44103) 65 : cephadm [INF] Reconfiguring osd.6 (monmap changed)... 2026-03-09T14:39:27.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: cephadm 2026-03-09T14:39:25.015767+0000 mgr.y (mgr.44103) 66 : cephadm [INF] Reconfiguring daemon osd.6 on vm11 2026-03-09T14:39:27.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: cephadm 2026-03-09T14:39:25.015767+0000 mgr.y (mgr.44103) 66 : cephadm [INF] Reconfiguring daemon osd.6 on vm11 2026-03-09T14:39:27.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: audit 2026-03-09T14:39:25.734169+0000 mon.a (mon.0) 176 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:27.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: audit 2026-03-09T14:39:25.734169+0000 mon.a (mon.0) 176 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:27.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: audit 2026-03-09T14:39:25.741063+0000 mon.a (mon.0) 177 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:27.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: audit 2026-03-09T14:39:25.741063+0000 mon.a (mon.0) 177 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:27.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: cephadm 2026-03-09T14:39:25.742018+0000 mgr.y (mgr.44103) 67 : cephadm [INF] Reconfiguring rgw.foo.vm11.ncyump (monmap changed)... 
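The CephNodeDiskspaceWarning failure logged by prometheus.a at 14:39:27.003 above is a metrics-join problem rather than a disk problem: two node_uname_info series exist for instance="vm07" (one carrying a cluster label, one without), so the rule's on (instance) group_left (nodename) match becomes many-to-many and the evaluation is rejected. One way to confirm the duplicate series is to query the Prometheus HTTP API on prometheus.a (bound to *:9095 on vm11 per the orch ps listing above); the curl/jq invocation below is only an illustrative sketch, assuming that endpoint is reachable from the admin host:

    # Count the node_uname_info series Prometheus holds for vm07;
    # any count greater than 1 reproduces the many-to-many join error.
    curl -sG 'http://vm11:9095/api/v1/query' \
        --data-urlencode 'query=node_uname_info{instance="vm07"}' \
        | jq '.data.result | length'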
2026-03-09T14:39:27.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: cephadm 2026-03-09T14:39:25.742018+0000 mgr.y (mgr.44103) 67 : cephadm [INF] Reconfiguring rgw.foo.vm11.ncyump (monmap changed)... 2026-03-09T14:39:27.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: audit 2026-03-09T14:39:25.742603+0000 mon.a (mon.0) 178 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm11.ncyump", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:39:27.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: audit 2026-03-09T14:39:25.742603+0000 mon.a (mon.0) 178 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm11.ncyump", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:39:27.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: audit 2026-03-09T14:39:25.744705+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:27.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: audit 2026-03-09T14:39:25.744705+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:27.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: cephadm 2026-03-09T14:39:25.745304+0000 mgr.y (mgr.44103) 68 : cephadm [INF] Reconfiguring daemon rgw.foo.vm11.ncyump on vm11 2026-03-09T14:39:27.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: cephadm 2026-03-09T14:39:25.745304+0000 mgr.y (mgr.44103) 68 : cephadm [INF] Reconfiguring daemon rgw.foo.vm11.ncyump on vm11 2026-03-09T14:39:27.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: audit 2026-03-09T14:39:25.880958+0000 mgr.y (mgr.44103) 69 : audit [DBG] from='client.34138 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:27.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: audit 2026-03-09T14:39:25.880958+0000 mgr.y (mgr.44103) 69 : audit [DBG] from='client.34138 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:27.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: audit 2026-03-09T14:39:26.083929+0000 mgr.y (mgr.44103) 70 : audit [DBG] from='client.34141 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:27.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: audit 2026-03-09T14:39:26.083929+0000 mgr.y (mgr.44103) 70 : audit [DBG] from='client.34141 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:27.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: audit 2026-03-09T14:39:26.108258+0000 mon.a (mon.0) 180 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:27.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: audit 
2026-03-09T14:39:26.108258+0000 mon.a (mon.0) 180 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:27.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: audit 2026-03-09T14:39:26.112370+0000 mon.a (mon.0) 181 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:27.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: audit 2026-03-09T14:39:26.112370+0000 mon.a (mon.0) 181 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:27.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: cephadm 2026-03-09T14:39:26.113290+0000 mgr.y (mgr.44103) 71 : cephadm [INF] Reconfiguring rgw.smpl.vm11.ocxkef (monmap changed)... 2026-03-09T14:39:27.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: cephadm 2026-03-09T14:39:26.113290+0000 mgr.y (mgr.44103) 71 : cephadm [INF] Reconfiguring rgw.smpl.vm11.ocxkef (monmap changed)... 2026-03-09T14:39:27.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: audit 2026-03-09T14:39:26.113512+0000 mon.a (mon.0) 182 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm11.ocxkef", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:39:27.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: audit 2026-03-09T14:39:26.113512+0000 mon.a (mon.0) 182 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm11.ocxkef", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:39:27.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: audit 2026-03-09T14:39:26.114505+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:27.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: audit 2026-03-09T14:39:26.114505+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:27.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: cephadm 2026-03-09T14:39:26.114965+0000 mgr.y (mgr.44103) 72 : cephadm [INF] Reconfiguring daemon rgw.smpl.vm11.ocxkef on vm11 2026-03-09T14:39:27.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: cephadm 2026-03-09T14:39:26.114965+0000 mgr.y (mgr.44103) 72 : cephadm [INF] Reconfiguring daemon rgw.smpl.vm11.ocxkef on vm11 2026-03-09T14:39:27.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: audit 2026-03-09T14:39:26.286341+0000 mgr.y (mgr.44103) 73 : audit [DBG] from='client.34147 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:27.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: audit 2026-03-09T14:39:26.286341+0000 mgr.y (mgr.44103) 73 : audit [DBG] from='client.34147 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:27.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: audit 2026-03-09T14:39:26.503680+0000 mon.a (mon.0) 
184 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:27.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: audit 2026-03-09T14:39:26.503680+0000 mon.a (mon.0) 184 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:27.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: audit 2026-03-09T14:39:26.509053+0000 mon.a (mon.0) 185 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:27.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: audit 2026-03-09T14:39:26.509053+0000 mon.a (mon.0) 185 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:27.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: audit 2026-03-09T14:39:26.510431+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:39:27.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: audit 2026-03-09T14:39:26.510431+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:39:27.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: audit 2026-03-09T14:39:26.510975+0000 mon.a (mon.0) 187 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:39:27.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: audit 2026-03-09T14:39:26.510975+0000 mon.a (mon.0) 187 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:39:27.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: audit 2026-03-09T14:39:26.511439+0000 mon.a (mon.0) 188 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:27.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: audit 2026-03-09T14:39:26.511439+0000 mon.a (mon.0) 188 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:27.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: audit 2026-03-09T14:39:26.541867+0000 mon.a (mon.0) 189 : audit [DBG] from='client.? 192.168.123.107:0/2665102640' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:27.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:26 vm07 bash[55244]: audit 2026-03-09T14:39:26.541867+0000 mon.a (mon.0) 189 : audit [DBG] from='client.? 
192.168.123.107:0/2665102640' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:27.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: cluster 2026-03-09T14:39:24.526815+0000 mgr.y (mgr.44103) 62 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:39:27.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: cluster 2026-03-09T14:39:24.526815+0000 mgr.y (mgr.44103) 62 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:39:27.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: cephadm 2026-03-09T14:39:24.646304+0000 mgr.y (mgr.44103) 63 : cephadm [INF] Reconfiguring mgr.x (monmap changed)... 2026-03-09T14:39:27.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: cephadm 2026-03-09T14:39:24.646304+0000 mgr.y (mgr.44103) 63 : cephadm [INF] Reconfiguring mgr.x (monmap changed)... 2026-03-09T14:39:27.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: cephadm 2026-03-09T14:39:24.647810+0000 mgr.y (mgr.44103) 64 : cephadm [INF] Reconfiguring daemon mgr.x on vm11 2026-03-09T14:39:27.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: cephadm 2026-03-09T14:39:24.647810+0000 mgr.y (mgr.44103) 64 : cephadm [INF] Reconfiguring daemon mgr.x on vm11 2026-03-09T14:39:27.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: cephadm 2026-03-09T14:39:25.013935+0000 mgr.y (mgr.44103) 65 : cephadm [INF] Reconfiguring osd.6 (monmap changed)... 2026-03-09T14:39:27.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: cephadm 2026-03-09T14:39:25.013935+0000 mgr.y (mgr.44103) 65 : cephadm [INF] Reconfiguring osd.6 (monmap changed)... 2026-03-09T14:39:27.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: cephadm 2026-03-09T14:39:25.015767+0000 mgr.y (mgr.44103) 66 : cephadm [INF] Reconfiguring daemon osd.6 on vm11 2026-03-09T14:39:27.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: cephadm 2026-03-09T14:39:25.015767+0000 mgr.y (mgr.44103) 66 : cephadm [INF] Reconfiguring daemon osd.6 on vm11 2026-03-09T14:39:27.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: audit 2026-03-09T14:39:25.734169+0000 mon.a (mon.0) 176 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:27.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: audit 2026-03-09T14:39:25.734169+0000 mon.a (mon.0) 176 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:27.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: audit 2026-03-09T14:39:25.741063+0000 mon.a (mon.0) 177 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:27.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: audit 2026-03-09T14:39:25.741063+0000 mon.a (mon.0) 177 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:27.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: cephadm 2026-03-09T14:39:25.742018+0000 mgr.y (mgr.44103) 67 : cephadm [INF] Reconfiguring rgw.foo.vm11.ncyump (monmap changed)... 
2026-03-09T14:39:27.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: cephadm 2026-03-09T14:39:25.742018+0000 mgr.y (mgr.44103) 67 : cephadm [INF] Reconfiguring rgw.foo.vm11.ncyump (monmap changed)... 2026-03-09T14:39:27.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: audit 2026-03-09T14:39:25.742603+0000 mon.a (mon.0) 178 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm11.ncyump", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:39:27.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: audit 2026-03-09T14:39:25.742603+0000 mon.a (mon.0) 178 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm11.ncyump", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:39:27.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: audit 2026-03-09T14:39:25.744705+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:27.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: audit 2026-03-09T14:39:25.744705+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:27.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: cephadm 2026-03-09T14:39:25.745304+0000 mgr.y (mgr.44103) 68 : cephadm [INF] Reconfiguring daemon rgw.foo.vm11.ncyump on vm11 2026-03-09T14:39:27.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: cephadm 2026-03-09T14:39:25.745304+0000 mgr.y (mgr.44103) 68 : cephadm [INF] Reconfiguring daemon rgw.foo.vm11.ncyump on vm11 2026-03-09T14:39:27.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: audit 2026-03-09T14:39:25.880958+0000 mgr.y (mgr.44103) 69 : audit [DBG] from='client.34138 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:27.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: audit 2026-03-09T14:39:25.880958+0000 mgr.y (mgr.44103) 69 : audit [DBG] from='client.34138 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:27.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: audit 2026-03-09T14:39:26.083929+0000 mgr.y (mgr.44103) 70 : audit [DBG] from='client.34141 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:27.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: audit 2026-03-09T14:39:26.083929+0000 mgr.y (mgr.44103) 70 : audit [DBG] from='client.34141 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:27.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: audit 2026-03-09T14:39:26.108258+0000 mon.a (mon.0) 180 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:27.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: audit 
2026-03-09T14:39:26.108258+0000 mon.a (mon.0) 180 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:27.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: audit 2026-03-09T14:39:26.112370+0000 mon.a (mon.0) 181 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:27.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: audit 2026-03-09T14:39:26.112370+0000 mon.a (mon.0) 181 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:27.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: cephadm 2026-03-09T14:39:26.113290+0000 mgr.y (mgr.44103) 71 : cephadm [INF] Reconfiguring rgw.smpl.vm11.ocxkef (monmap changed)... 2026-03-09T14:39:27.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: cephadm 2026-03-09T14:39:26.113290+0000 mgr.y (mgr.44103) 71 : cephadm [INF] Reconfiguring rgw.smpl.vm11.ocxkef (monmap changed)... 2026-03-09T14:39:27.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: audit 2026-03-09T14:39:26.113512+0000 mon.a (mon.0) 182 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm11.ocxkef", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:39:27.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: audit 2026-03-09T14:39:26.113512+0000 mon.a (mon.0) 182 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm11.ocxkef", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:39:27.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: audit 2026-03-09T14:39:26.114505+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:27.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: audit 2026-03-09T14:39:26.114505+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:27.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: cephadm 2026-03-09T14:39:26.114965+0000 mgr.y (mgr.44103) 72 : cephadm [INF] Reconfiguring daemon rgw.smpl.vm11.ocxkef on vm11 2026-03-09T14:39:27.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: cephadm 2026-03-09T14:39:26.114965+0000 mgr.y (mgr.44103) 72 : cephadm [INF] Reconfiguring daemon rgw.smpl.vm11.ocxkef on vm11 2026-03-09T14:39:27.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: audit 2026-03-09T14:39:26.286341+0000 mgr.y (mgr.44103) 73 : audit [DBG] from='client.34147 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:27.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: audit 2026-03-09T14:39:26.286341+0000 mgr.y (mgr.44103) 73 : audit [DBG] from='client.34147 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:27.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: audit 2026-03-09T14:39:26.503680+0000 mon.a (mon.0) 
184 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:27.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: audit 2026-03-09T14:39:26.503680+0000 mon.a (mon.0) 184 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:27.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: audit 2026-03-09T14:39:26.509053+0000 mon.a (mon.0) 185 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:27.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: audit 2026-03-09T14:39:26.509053+0000 mon.a (mon.0) 185 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:27.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: audit 2026-03-09T14:39:26.510431+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:39:27.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: audit 2026-03-09T14:39:26.510431+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T14:39:27.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: audit 2026-03-09T14:39:26.510975+0000 mon.a (mon.0) 187 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:39:27.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: audit 2026-03-09T14:39:26.510975+0000 mon.a (mon.0) 187 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T14:39:27.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: audit 2026-03-09T14:39:26.511439+0000 mon.a (mon.0) 188 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:27.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: audit 2026-03-09T14:39:26.511439+0000 mon.a (mon.0) 188 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:27.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: audit 2026-03-09T14:39:26.541867+0000 mon.a (mon.0) 189 : audit [DBG] from='client.? 192.168.123.107:0/2665102640' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:27.158 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:26 vm07 bash[56315]: audit 2026-03-09T14:39:26.541867+0000 mon.a (mon.0) 189 : audit [DBG] from='client.? 192.168.123.107:0/2665102640' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:28.071 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: cephadm 2026-03-09T14:39:26.510250+0000 mgr.y (mgr.44103) 74 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-09T14:39:28.071 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: cephadm 2026-03-09T14:39:26.510250+0000 mgr.y (mgr.44103) 74 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 
2026-03-09T14:39:28.071 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: cephadm 2026-03-09T14:39:26.511963+0000 mgr.y (mgr.44103) 75 : cephadm [INF] Reconfiguring daemon mon.b on vm11 2026-03-09T14:39:28.071 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: cephadm 2026-03-09T14:39:26.511963+0000 mgr.y (mgr.44103) 75 : cephadm [INF] Reconfiguring daemon mon.b on vm11 2026-03-09T14:39:28.071 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: cluster 2026-03-09T14:39:26.527066+0000 mgr.y (mgr.44103) 76 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:28.071 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: cluster 2026-03-09T14:39:26.527066+0000 mgr.y (mgr.44103) 76 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:28.071 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:26.743977+0000 mgr.y (mgr.44103) 77 : audit [DBG] from='client.34159 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:28.071 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:26.743977+0000 mgr.y (mgr.44103) 77 : audit [DBG] from='client.34159 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:28.071 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:26.875653+0000 mon.a (mon.0) 190 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.071 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:26.875653+0000 mon.a (mon.0) 190 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.071 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:26.880193+0000 mon.a (mon.0) 191 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.071 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:26.880193+0000 mon.a (mon.0) 191 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.071 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: cephadm 2026-03-09T14:39:26.880853+0000 mgr.y (mgr.44103) 78 : cephadm [INF] Reconfiguring osd.7 (monmap changed)... 2026-03-09T14:39:28.071 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: cephadm 2026-03-09T14:39:26.880853+0000 mgr.y (mgr.44103) 78 : cephadm [INF] Reconfiguring osd.7 (monmap changed)... 
2026-03-09T14:39:28.071 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:26.881235+0000 mon.a (mon.0) 192 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T14:39:28.071 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:26.881235+0000 mon.a (mon.0) 192 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T14:39:28.071 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:26.881837+0000 mon.a (mon.0) 193 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:28.071 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:26.881837+0000 mon.a (mon.0) 193 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:28.071 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: cephadm 2026-03-09T14:39:26.882924+0000 mgr.y (mgr.44103) 79 : cephadm [INF] Reconfiguring daemon osd.7 on vm11 2026-03-09T14:39:28.071 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: cephadm 2026-03-09T14:39:26.882924+0000 mgr.y (mgr.44103) 79 : cephadm [INF] Reconfiguring daemon osd.7 on vm11 2026-03-09T14:39:28.071 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:26.999800+0000 mon.c (mon.1) 8 : audit [DBG] from='client.? 192.168.123.107:0/1723562859' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:39:28.071 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:26.999800+0000 mon.c (mon.1) 8 : audit [DBG] from='client.? 
192.168.123.107:0/1723562859' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:39:28.071 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:27.280221+0000 mon.a (mon.0) 194 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.071 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:27.280221+0000 mon.a (mon.0) 194 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.071 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:27.284690+0000 mon.a (mon.0) 195 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.071 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:27.284690+0000 mon.a (mon.0) 195 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.071 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:27.314956+0000 mon.a (mon.0) 196 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:39:28.071 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:27.314956+0000 mon.a (mon.0) 196 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:39:28.071 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:27.316246+0000 mon.a (mon.0) 197 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:28.072 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:27.316246+0000 mon.a (mon.0) 197 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:28.072 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:27.317187+0000 mon.a (mon.0) 198 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:28.072 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:27.317187+0000 mon.a (mon.0) 198 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:28.072 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: cephadm 2026-03-09T14:39:27.317601+0000 mgr.y (mgr.44103) 80 : cephadm [INF] Upgrade: Setting container_image for all mon 2026-03-09T14:39:28.072 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: cephadm 2026-03-09T14:39:27.317601+0000 mgr.y (mgr.44103) 80 : cephadm [INF] Upgrade: Setting container_image for all mon 2026-03-09T14:39:28.072 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:27.321251+0000 mon.a (mon.0) 199 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.072 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:27.321251+0000 mon.a (mon.0) 199 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 
2026-03-09T14:39:28.072 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:27.326213+0000 mon.a (mon.0) 200 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T14:39:28.072 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:27.326213+0000 mon.a (mon.0) 200 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T14:39:28.072 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:27.333080+0000 mon.a (mon.0) 201 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]': finished 2026-03-09T14:39:28.072 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:27.333080+0000 mon.a (mon.0) 201 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]': finished 2026-03-09T14:39:28.072 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:27.339562+0000 mon.a (mon.0) 202 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-09T14:39:28.072 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:27.339562+0000 mon.a (mon.0) 202 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-09T14:39:28.072 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:27.346946+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]': finished 2026-03-09T14:39:28.072 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:27.346946+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]': finished 2026-03-09T14:39:28.072 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:27.349975+0000 mon.a (mon.0) 204 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T14:39:28.072 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:27.349975+0000 mon.a (mon.0) 204 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T14:39:28.072 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:27.355228+0000 mon.a (mon.0) 205 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]': finished 2026-03-09T14:39:28.072 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:27.355228+0000 
mon.a (mon.0) 205 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]': finished 2026-03-09T14:39:28.072 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:27.357296+0000 mon.a (mon.0) 206 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:28.072 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:27.357296+0000 mon.a (mon.0) 206 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:28.072 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: cephadm 2026-03-09T14:39:27.357725+0000 mgr.y (mgr.44103) 81 : cephadm [INF] Upgrade: Setting container_image for all crash 2026-03-09T14:39:28.072 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: cephadm 2026-03-09T14:39:27.357725+0000 mgr.y (mgr.44103) 81 : cephadm [INF] Upgrade: Setting container_image for all crash 2026-03-09T14:39:28.072 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:27.360586+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.072 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:27.360586+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.072 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:27.363975+0000 mon.a (mon.0) 208 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T14:39:28.072 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:27.363975+0000 mon.a (mon.0) 208 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T14:39:28.072 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:27.364134+0000 mgr.y (mgr.44103) 82 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T14:39:28.072 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:27.364134+0000 mgr.y (mgr.44103) 82 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T14:39:28.072 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: cephadm 2026-03-09T14:39:27.364770+0000 mgr.y (mgr.44103) 83 : cephadm [INF] Upgrade: osd.3 is safe to restart 2026-03-09T14:39:28.072 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: cephadm 2026-03-09T14:39:27.364770+0000 mgr.y (mgr.44103) 83 : cephadm [INF] Upgrade: osd.3 is safe to restart 2026-03-09T14:39:28.072 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:27.790868+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.072 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:27.790868+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.072 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:27.794102+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T14:39:28.072 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:27.794102+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T14:39:28.072 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:27.794506+0000 mon.a (mon.0) 211 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:28.072 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:27 vm07 bash[55244]: audit 2026-03-09T14:39:27.794506+0000 mon.a (mon.0) 211 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: cephadm 2026-03-09T14:39:26.510250+0000 mgr.y (mgr.44103) 74 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: cephadm 2026-03-09T14:39:26.510250+0000 mgr.y (mgr.44103) 74 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 
2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: cephadm 2026-03-09T14:39:26.511963+0000 mgr.y (mgr.44103) 75 : cephadm [INF] Reconfiguring daemon mon.b on vm11 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: cephadm 2026-03-09T14:39:26.511963+0000 mgr.y (mgr.44103) 75 : cephadm [INF] Reconfiguring daemon mon.b on vm11 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: cluster 2026-03-09T14:39:26.527066+0000 mgr.y (mgr.44103) 76 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: cluster 2026-03-09T14:39:26.527066+0000 mgr.y (mgr.44103) 76 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:26.743977+0000 mgr.y (mgr.44103) 77 : audit [DBG] from='client.34159 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:26.743977+0000 mgr.y (mgr.44103) 77 : audit [DBG] from='client.34159 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:26.875653+0000 mon.a (mon.0) 190 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:26.875653+0000 mon.a (mon.0) 190 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:26.880193+0000 mon.a (mon.0) 191 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:26.880193+0000 mon.a (mon.0) 191 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: cephadm 2026-03-09T14:39:26.880853+0000 mgr.y (mgr.44103) 78 : cephadm [INF] Reconfiguring osd.7 (monmap changed)... 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: cephadm 2026-03-09T14:39:26.880853+0000 mgr.y (mgr.44103) 78 : cephadm [INF] Reconfiguring osd.7 (monmap changed)... 
2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:26.881235+0000 mon.a (mon.0) 192 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:26.881235+0000 mon.a (mon.0) 192 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:26.881837+0000 mon.a (mon.0) 193 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:26.881837+0000 mon.a (mon.0) 193 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: cephadm 2026-03-09T14:39:26.882924+0000 mgr.y (mgr.44103) 79 : cephadm [INF] Reconfiguring daemon osd.7 on vm11 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: cephadm 2026-03-09T14:39:26.882924+0000 mgr.y (mgr.44103) 79 : cephadm [INF] Reconfiguring daemon osd.7 on vm11 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:26.999800+0000 mon.c (mon.1) 8 : audit [DBG] from='client.? 192.168.123.107:0/1723562859' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:26.999800+0000 mon.c (mon.1) 8 : audit [DBG] from='client.? 
192.168.123.107:0/1723562859' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:27.280221+0000 mon.a (mon.0) 194 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:27.280221+0000 mon.a (mon.0) 194 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:27.284690+0000 mon.a (mon.0) 195 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:27.284690+0000 mon.a (mon.0) 195 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:27.314956+0000 mon.a (mon.0) 196 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:27.314956+0000 mon.a (mon.0) 196 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:27.316246+0000 mon.a (mon.0) 197 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:27.316246+0000 mon.a (mon.0) 197 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:27.317187+0000 mon.a (mon.0) 198 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:27.317187+0000 mon.a (mon.0) 198 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: cephadm 2026-03-09T14:39:27.317601+0000 mgr.y (mgr.44103) 80 : cephadm [INF] Upgrade: Setting container_image for all mon 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: cephadm 2026-03-09T14:39:27.317601+0000 mgr.y (mgr.44103) 80 : cephadm [INF] Upgrade: Setting container_image for all mon 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:27.321251+0000 mon.a (mon.0) 199 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:27.321251+0000 mon.a (mon.0) 199 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 
2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:27.326213+0000 mon.a (mon.0) 200 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:27.326213+0000 mon.a (mon.0) 200 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:27.333080+0000 mon.a (mon.0) 201 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]': finished 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:27.333080+0000 mon.a (mon.0) 201 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]': finished 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:27.339562+0000 mon.a (mon.0) 202 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-09T14:39:28.073 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:27.339562+0000 mon.a (mon.0) 202 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-09T14:39:28.074 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:27.346946+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]': finished 2026-03-09T14:39:28.074 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:27.346946+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]': finished 2026-03-09T14:39:28.074 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:27.349975+0000 mon.a (mon.0) 204 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T14:39:28.074 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:27.349975+0000 mon.a (mon.0) 204 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T14:39:28.074 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:27.355228+0000 mon.a (mon.0) 205 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]': finished 2026-03-09T14:39:28.074 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:27.355228+0000 
mon.a (mon.0) 205 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]': finished 2026-03-09T14:39:28.074 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:27.357296+0000 mon.a (mon.0) 206 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:28.074 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:27.357296+0000 mon.a (mon.0) 206 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:28.074 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: cephadm 2026-03-09T14:39:27.357725+0000 mgr.y (mgr.44103) 81 : cephadm [INF] Upgrade: Setting container_image for all crash 2026-03-09T14:39:28.074 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: cephadm 2026-03-09T14:39:27.357725+0000 mgr.y (mgr.44103) 81 : cephadm [INF] Upgrade: Setting container_image for all crash 2026-03-09T14:39:28.074 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:27.360586+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.074 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:27.360586+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.074 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:27.363975+0000 mon.a (mon.0) 208 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T14:39:28.074 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:27.363975+0000 mon.a (mon.0) 208 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T14:39:28.074 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:27.364134+0000 mgr.y (mgr.44103) 82 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T14:39:28.074 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:27.364134+0000 mgr.y (mgr.44103) 82 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T14:39:28.074 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: cephadm 2026-03-09T14:39:27.364770+0000 mgr.y (mgr.44103) 83 : cephadm [INF] Upgrade: osd.3 is safe to restart 2026-03-09T14:39:28.074 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: cephadm 2026-03-09T14:39:27.364770+0000 mgr.y (mgr.44103) 83 : cephadm [INF] Upgrade: osd.3 is safe to restart 2026-03-09T14:39:28.074 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:27.790868+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.074 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:27.790868+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.074 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:27.794102+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T14:39:28.074 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:27.794102+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T14:39:28.074 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:27.794506+0000 mon.a (mon.0) 211 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:28.074 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:27 vm07 bash[56315]: audit 2026-03-09T14:39:27.794506+0000 mon.a (mon.0) 211 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: cephadm 2026-03-09T14:39:26.510250+0000 mgr.y (mgr.44103) 74 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: cephadm 2026-03-09T14:39:26.510250+0000 mgr.y (mgr.44103) 74 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 
2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: cephadm 2026-03-09T14:39:26.511963+0000 mgr.y (mgr.44103) 75 : cephadm [INF] Reconfiguring daemon mon.b on vm11 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: cephadm 2026-03-09T14:39:26.511963+0000 mgr.y (mgr.44103) 75 : cephadm [INF] Reconfiguring daemon mon.b on vm11 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: cluster 2026-03-09T14:39:26.527066+0000 mgr.y (mgr.44103) 76 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: cluster 2026-03-09T14:39:26.527066+0000 mgr.y (mgr.44103) 76 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:26.743977+0000 mgr.y (mgr.44103) 77 : audit [DBG] from='client.34159 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:26.743977+0000 mgr.y (mgr.44103) 77 : audit [DBG] from='client.34159 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:26.875653+0000 mon.a (mon.0) 190 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:26.875653+0000 mon.a (mon.0) 190 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:26.880193+0000 mon.a (mon.0) 191 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:26.880193+0000 mon.a (mon.0) 191 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: cephadm 2026-03-09T14:39:26.880853+0000 mgr.y (mgr.44103) 78 : cephadm [INF] Reconfiguring osd.7 (monmap changed)... 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: cephadm 2026-03-09T14:39:26.880853+0000 mgr.y (mgr.44103) 78 : cephadm [INF] Reconfiguring osd.7 (monmap changed)... 
2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:26.881235+0000 mon.a (mon.0) 192 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:26.881235+0000 mon.a (mon.0) 192 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:26.881837+0000 mon.a (mon.0) 193 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:26.881837+0000 mon.a (mon.0) 193 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: cephadm 2026-03-09T14:39:26.882924+0000 mgr.y (mgr.44103) 79 : cephadm [INF] Reconfiguring daemon osd.7 on vm11 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: cephadm 2026-03-09T14:39:26.882924+0000 mgr.y (mgr.44103) 79 : cephadm [INF] Reconfiguring daemon osd.7 on vm11 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:26.999800+0000 mon.c (mon.1) 8 : audit [DBG] from='client.? 192.168.123.107:0/1723562859' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:26.999800+0000 mon.c (mon.1) 8 : audit [DBG] from='client.? 
192.168.123.107:0/1723562859' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:27.280221+0000 mon.a (mon.0) 194 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:27.280221+0000 mon.a (mon.0) 194 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:27.284690+0000 mon.a (mon.0) 195 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:27.284690+0000 mon.a (mon.0) 195 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:27.314956+0000 mon.a (mon.0) 196 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:27.314956+0000 mon.a (mon.0) 196 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:27.316246+0000 mon.a (mon.0) 197 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:27.316246+0000 mon.a (mon.0) 197 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:27.317187+0000 mon.a (mon.0) 198 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:27.317187+0000 mon.a (mon.0) 198 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: cephadm 2026-03-09T14:39:27.317601+0000 mgr.y (mgr.44103) 80 : cephadm [INF] Upgrade: Setting container_image for all mon 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: cephadm 2026-03-09T14:39:27.317601+0000 mgr.y (mgr.44103) 80 : cephadm [INF] Upgrade: Setting container_image for all mon 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:27.321251+0000 mon.a (mon.0) 199 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:27.321251+0000 mon.a (mon.0) 199 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 
2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:27.326213+0000 mon.a (mon.0) 200 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:27.326213+0000 mon.a (mon.0) 200 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:27.333080+0000 mon.a (mon.0) 201 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]': finished 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:27.333080+0000 mon.a (mon.0) 201 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]': finished 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:27.339562+0000 mon.a (mon.0) 202 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:27.339562+0000 mon.a (mon.0) 202 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:27.346946+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]': finished 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:27.346946+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]': finished 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:27.349975+0000 mon.a (mon.0) 204 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:27.349975+0000 mon.a (mon.0) 204 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch 2026-03-09T14:39:28.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:27.355228+0000 mon.a (mon.0) 205 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]': finished 2026-03-09T14:39:28.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:27.355228+0000 
mon.a (mon.0) 205 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]': finished 2026-03-09T14:39:28.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:27.357296+0000 mon.a (mon.0) 206 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:28.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:27.357296+0000 mon.a (mon.0) 206 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:28.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: cephadm 2026-03-09T14:39:27.357725+0000 mgr.y (mgr.44103) 81 : cephadm [INF] Upgrade: Setting container_image for all crash 2026-03-09T14:39:28.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: cephadm 2026-03-09T14:39:27.357725+0000 mgr.y (mgr.44103) 81 : cephadm [INF] Upgrade: Setting container_image for all crash 2026-03-09T14:39:28.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:27.360586+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:27.360586+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:27.363975+0000 mon.a (mon.0) 208 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T14:39:28.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:27.363975+0000 mon.a (mon.0) 208 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T14:39:28.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:27.364134+0000 mgr.y (mgr.44103) 82 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T14:39:28.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:27.364134+0000 mgr.y (mgr.44103) 82 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-09T14:39:28.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: cephadm 2026-03-09T14:39:27.364770+0000 mgr.y (mgr.44103) 83 : cephadm [INF] Upgrade: osd.3 is safe to restart 2026-03-09T14:39:28.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: cephadm 2026-03-09T14:39:27.364770+0000 mgr.y (mgr.44103) 83 : cephadm [INF] Upgrade: osd.3 is safe to restart 2026-03-09T14:39:28.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:27.790868+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:27.790868+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:28.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:27.794102+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T14:39:28.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:27.794102+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T14:39:28.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:27.794506+0000 mon.a (mon.0) 211 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:28.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:27 vm11 bash[43577]: audit 2026-03-09T14:39:27.794506+0000 mon.a (mon.0) 211 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:28.879 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:39:28 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:28.879 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:28 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:28.880 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:39:28 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:39:28.880 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:39:28 vm07 systemd[1]: Stopping Ceph osd.3 for f59f9828-1bc3-11f1-bfd8-7b3d0c866040... 2026-03-09T14:39:28.880 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:39:28 vm07 bash[34782]: debug 2026-03-09T14:39:28.657+0000 7f6c12ae0700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.3 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T14:39:28.880 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:39:28 vm07 bash[34782]: debug 2026-03-09T14:39:28.657+0000 7f6c12ae0700 -1 osd.3 92 *** Got signal Terminated *** 2026-03-09T14:39:28.880 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:39:28 vm07 bash[34782]: debug 2026-03-09T14:39:28.657+0000 7f6c12ae0700 -1 osd.3 92 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T14:39:28.880 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:39:28 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:28.880 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:39:28 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:28.880 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:39:28 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:28.880 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:28 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:28.880 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:39:28 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:28.880 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:39:28 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:29.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:28 vm07 bash[55244]: audit 2026-03-09T14:39:27.474091+0000 mgr.y (mgr.44103) 84 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:39:29.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:28 vm07 bash[55244]: audit 2026-03-09T14:39:27.474091+0000 mgr.y (mgr.44103) 84 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:39:29.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:28 vm07 bash[55244]: cephadm 2026-03-09T14:39:27.786586+0000 mgr.y (mgr.44103) 85 : cephadm [INF] Upgrade: Updating osd.3 2026-03-09T14:39:29.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:28 vm07 bash[55244]: cephadm 2026-03-09T14:39:27.786586+0000 mgr.y (mgr.44103) 85 : cephadm [INF] Upgrade: Updating osd.3 2026-03-09T14:39:29.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:28 vm07 bash[55244]: cephadm 2026-03-09T14:39:27.795694+0000 mgr.y (mgr.44103) 86 : cephadm [INF] Deploying daemon osd.3 on vm07 2026-03-09T14:39:29.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:28 vm07 bash[55244]: cephadm 2026-03-09T14:39:27.795694+0000 mgr.y (mgr.44103) 86 : cephadm [INF] Deploying daemon osd.3 on vm07 2026-03-09T14:39:29.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:28 vm07 bash[55244]: cluster 2026-03-09T14:39:28.659032+0000 mon.a (mon.0) 212 : cluster [INF] osd.3 marked itself down and dead 2026-03-09T14:39:29.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:28 vm07 bash[55244]: cluster 2026-03-09T14:39:28.659032+0000 mon.a (mon.0) 212 : cluster [INF] osd.3 marked itself down and dead 2026-03-09T14:39:29.156 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:39:28 vm07 bash[58650]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-osd-3 2026-03-09T14:39:29.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:28 vm07 bash[56315]: audit 2026-03-09T14:39:27.474091+0000 mgr.y (mgr.44103) 84 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:39:29.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:28 vm07 bash[56315]: audit 2026-03-09T14:39:27.474091+0000 mgr.y (mgr.44103) 84 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:39:29.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:28 vm07 bash[56315]: cephadm 2026-03-09T14:39:27.786586+0000 mgr.y (mgr.44103) 85 : cephadm [INF] Upgrade: Updating osd.3 2026-03-09T14:39:29.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:28 vm07 bash[56315]: cephadm 2026-03-09T14:39:27.786586+0000 mgr.y (mgr.44103) 85 : cephadm [INF] Upgrade: Updating osd.3 2026-03-09T14:39:29.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:28 vm07 bash[56315]: cephadm 2026-03-09T14:39:27.795694+0000 mgr.y (mgr.44103) 86 : cephadm [INF] Deploying daemon osd.3 on vm07 2026-03-09T14:39:29.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:28 vm07 bash[56315]: cephadm 2026-03-09T14:39:27.795694+0000 mgr.y (mgr.44103) 86 : cephadm [INF] Deploying daemon osd.3 on vm07 2026-03-09T14:39:29.156 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:28 vm07 bash[56315]: cluster 2026-03-09T14:39:28.659032+0000 mon.a (mon.0) 212 : cluster [INF] osd.3 marked itself down and dead 2026-03-09T14:39:29.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:28 vm07 bash[56315]: cluster 2026-03-09T14:39:28.659032+0000 mon.a (mon.0) 212 : cluster [INF] osd.3 marked itself down and dead 2026-03-09T14:39:29.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:28 vm11 bash[43577]: audit 2026-03-09T14:39:27.474091+0000 mgr.y (mgr.44103) 84 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:39:29.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:28 vm11 bash[43577]: audit 2026-03-09T14:39:27.474091+0000 mgr.y (mgr.44103) 84 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:39:29.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:28 vm11 bash[43577]: cephadm 2026-03-09T14:39:27.786586+0000 mgr.y (mgr.44103) 85 : cephadm [INF] Upgrade: Updating osd.3 2026-03-09T14:39:29.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:28 vm11 bash[43577]: cephadm 2026-03-09T14:39:27.786586+0000 mgr.y (mgr.44103) 85 : cephadm [INF] Upgrade: Updating osd.3 2026-03-09T14:39:29.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:28 vm11 bash[43577]: cephadm 2026-03-09T14:39:27.795694+0000 mgr.y (mgr.44103) 86 : cephadm [INF] Deploying daemon osd.3 on vm07 2026-03-09T14:39:29.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:28 vm11 bash[43577]: cephadm 2026-03-09T14:39:27.795694+0000 mgr.y (mgr.44103) 86 : cephadm [INF] Deploying daemon osd.3 on vm07 2026-03-09T14:39:29.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:28 vm11 bash[43577]: cluster 2026-03-09T14:39:28.659032+0000 mon.a (mon.0) 212 : cluster [INF] osd.3 marked itself down and dead 2026-03-09T14:39:29.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:28 vm11 bash[43577]: cluster 2026-03-09T14:39:28.659032+0000 mon.a (mon.0) 212 : cluster [INF] osd.3 marked itself down and dead 2026-03-09T14:39:29.475 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:39:29 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:29.475 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:29 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:29.475 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:39:29 vm07 systemd[1]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@osd.3.service: Deactivated successfully. 2026-03-09T14:39:29.475 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:39:29 vm07 systemd[1]: Stopped Ceph osd.3 for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 
2026-03-09T14:39:29.475 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:39:29 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:29.475 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:39:29 vm07 systemd[1]: Started Ceph osd.3 for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:39:29.475 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:39:29 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:29.475 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:39:29 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:29.475 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:39:29 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:29.475 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:29 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:29.476 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:39:29 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:29.476 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:39:29 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:39:29.906 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:39:29 vm07 bash[58857]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T14:39:29.906 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:39:29 vm07 bash[58857]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T14:39:30.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:29 vm11 bash[43577]: cluster 2026-03-09T14:39:28.527359+0000 mgr.y (mgr.44103) 87 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:30.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:29 vm11 bash[43577]: cluster 2026-03-09T14:39:28.527359+0000 mgr.y (mgr.44103) 87 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:30.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:29 vm11 bash[43577]: cluster 2026-03-09T14:39:28.877743+0000 mon.a (mon.0) 213 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:39:30.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:29 vm11 bash[43577]: cluster 2026-03-09T14:39:28.877743+0000 mon.a (mon.0) 213 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:39:30.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:29 vm11 bash[43577]: cluster 2026-03-09T14:39:28.905808+0000 mon.a (mon.0) 214 : cluster [DBG] osdmap e93: 8 total, 7 up, 8 in 2026-03-09T14:39:30.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:29 vm11 bash[43577]: cluster 2026-03-09T14:39:28.905808+0000 mon.a (mon.0) 214 : cluster [DBG] osdmap e93: 8 total, 7 up, 8 in 2026-03-09T14:39:30.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:29 vm11 bash[43577]: audit 2026-03-09T14:39:29.515495+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:30.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:29 vm11 bash[43577]: audit 2026-03-09T14:39:29.515495+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:30.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:29 vm11 bash[43577]: audit 2026-03-09T14:39:29.521402+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:30.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:29 vm11 bash[43577]: audit 2026-03-09T14:39:29.521402+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:30.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:29 vm07 bash[55244]: cluster 2026-03-09T14:39:28.527359+0000 mgr.y (mgr.44103) 87 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:30.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:29 vm07 bash[55244]: cluster 2026-03-09T14:39:28.527359+0000 mgr.y (mgr.44103) 87 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:30.406 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:29 vm07 bash[55244]: cluster 2026-03-09T14:39:28.877743+0000 mon.a (mon.0) 213 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:39:30.406 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:29 vm07 bash[55244]: cluster 2026-03-09T14:39:28.877743+0000 mon.a (mon.0) 
213 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:39:30.406 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:29 vm07 bash[55244]: cluster 2026-03-09T14:39:28.905808+0000 mon.a (mon.0) 214 : cluster [DBG] osdmap e93: 8 total, 7 up, 8 in 2026-03-09T14:39:30.406 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:29 vm07 bash[55244]: cluster 2026-03-09T14:39:28.905808+0000 mon.a (mon.0) 214 : cluster [DBG] osdmap e93: 8 total, 7 up, 8 in 2026-03-09T14:39:30.406 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:29 vm07 bash[55244]: audit 2026-03-09T14:39:29.515495+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:30.406 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:29 vm07 bash[55244]: audit 2026-03-09T14:39:29.515495+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:30.406 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:29 vm07 bash[55244]: audit 2026-03-09T14:39:29.521402+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:30.406 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:29 vm07 bash[55244]: audit 2026-03-09T14:39:29.521402+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:30.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:29 vm07 bash[56315]: cluster 2026-03-09T14:39:28.527359+0000 mgr.y (mgr.44103) 87 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:30.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:29 vm07 bash[56315]: cluster 2026-03-09T14:39:28.527359+0000 mgr.y (mgr.44103) 87 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:30.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:29 vm07 bash[56315]: cluster 2026-03-09T14:39:28.877743+0000 mon.a (mon.0) 213 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:39:30.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:29 vm07 bash[56315]: cluster 2026-03-09T14:39:28.877743+0000 mon.a (mon.0) 213 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:39:30.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:29 vm07 bash[56315]: cluster 2026-03-09T14:39:28.905808+0000 mon.a (mon.0) 214 : cluster [DBG] osdmap e93: 8 total, 7 up, 8 in 2026-03-09T14:39:30.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:29 vm07 bash[56315]: cluster 2026-03-09T14:39:28.905808+0000 mon.a (mon.0) 214 : cluster [DBG] osdmap e93: 8 total, 7 up, 8 in 2026-03-09T14:39:30.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:29 vm07 bash[56315]: audit 2026-03-09T14:39:29.515495+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:30.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:29 vm07 bash[56315]: audit 2026-03-09T14:39:29.515495+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:30.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:29 vm07 bash[56315]: audit 2026-03-09T14:39:29.521402+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:30.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:29 vm07 
bash[56315]: audit 2026-03-09T14:39:29.521402+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:30.880 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:39:30 vm07 bash[58857]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T14:39:30.880 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:39:30 vm07 bash[58857]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T14:39:30.880 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:39:30 vm07 bash[58857]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T14:39:30.880 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:39:30 vm07 bash[58857]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3 2026-03-09T14:39:30.880 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:39:30 vm07 bash[58857]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-d1557007-4fe1-4deb-97f9-c4fc16ce9ddc/osd-block-afc54d82-66a7-42e1-83c1-0970428ef794 --path /var/lib/ceph/osd/ceph-3 --no-mon-config 2026-03-09T14:39:31.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:30 vm07 bash[55244]: cluster 2026-03-09T14:39:29.905349+0000 mon.a (mon.0) 217 : cluster [DBG] osdmap e94: 8 total, 7 up, 8 in 2026-03-09T14:39:31.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:30 vm07 bash[55244]: cluster 2026-03-09T14:39:29.905349+0000 mon.a (mon.0) 217 : cluster [DBG] osdmap e94: 8 total, 7 up, 8 in 2026-03-09T14:39:31.156 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:39:30 vm07 bash[58857]: Running command: /usr/bin/ln -snf /dev/ceph-d1557007-4fe1-4deb-97f9-c4fc16ce9ddc/osd-block-afc54d82-66a7-42e1-83c1-0970428ef794 /var/lib/ceph/osd/ceph-3/block 2026-03-09T14:39:31.156 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:39:30 vm07 bash[58857]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-3/block 2026-03-09T14:39:31.156 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:39:30 vm07 bash[58857]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-3 2026-03-09T14:39:31.156 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:39:30 vm07 bash[58857]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3 2026-03-09T14:39:31.156 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:39:30 vm07 bash[58857]: --> ceph-volume lvm activate successful for osd ID: 3 2026-03-09T14:39:31.156 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:39:31 vm07 bash[59212]: debug 2026-03-09T14:39:31.057+0000 7f752c5f6640 1 -- 192.168.123.107:0/4060150327 <== mon.0 v2:192.168.123.107:3300/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 194+0+0 (secure 0 0 0) 0x562b67471680 con 0x562b6667f800 2026-03-09T14:39:31.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:30 vm07 bash[56315]: cluster 2026-03-09T14:39:29.905349+0000 mon.a (mon.0) 217 : cluster [DBG] osdmap e94: 8 total, 7 up, 8 in 2026-03-09T14:39:31.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:30 vm07 bash[56315]: cluster 2026-03-09T14:39:29.905349+0000 mon.a (mon.0) 217 : cluster [DBG] osdmap e94: 8 total, 7 up, 8 in 2026-03-09T14:39:31.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:30 vm11 bash[43577]: cluster 2026-03-09T14:39:29.905349+0000 mon.a (mon.0) 217 : cluster [DBG] osdmap e94: 8 total, 7 up, 8 in 2026-03-09T14:39:31.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:30 vm11 bash[43577]: cluster 2026-03-09T14:39:29.905349+0000 mon.a (mon.0) 217 : cluster [DBG] osdmap e94: 8 total, 7 up, 8 in 
2026-03-09T14:39:32.031 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:39:31 vm07 bash[59212]: debug 2026-03-09T14:39:31.765+0000 7f752ee60740 -1 Falling back to public interface 2026-03-09T14:39:32.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:32 vm07 bash[55244]: cluster 2026-03-09T14:39:30.527658+0000 mgr.y (mgr.44103) 88 : cluster [DBG] pgmap v24: 161 pgs: 27 stale+active+clean, 134 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:39:32.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:32 vm07 bash[55244]: cluster 2026-03-09T14:39:30.527658+0000 mgr.y (mgr.44103) 88 : cluster [DBG] pgmap v24: 161 pgs: 27 stale+active+clean, 134 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:39:32.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:32 vm07 bash[56315]: cluster 2026-03-09T14:39:30.527658+0000 mgr.y (mgr.44103) 88 : cluster [DBG] pgmap v24: 161 pgs: 27 stale+active+clean, 134 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:39:32.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:32 vm07 bash[56315]: cluster 2026-03-09T14:39:30.527658+0000 mgr.y (mgr.44103) 88 : cluster [DBG] pgmap v24: 161 pgs: 27 stale+active+clean, 134 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:39:32.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:32 vm11 bash[43577]: cluster 2026-03-09T14:39:30.527658+0000 mgr.y (mgr.44103) 88 : cluster [DBG] pgmap v24: 161 pgs: 27 stale+active+clean, 134 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:39:32.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:32 vm11 bash[43577]: cluster 2026-03-09T14:39:30.527658+0000 mgr.y (mgr.44103) 88 : cluster [DBG] pgmap v24: 161 pgs: 27 stale+active+clean, 134 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:39:33.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:33 vm07 bash[55244]: audit 2026-03-09T14:39:33.029837+0000 mon.c (mon.1) 9 : audit [INF] from='osd.3 [v2:192.168.123.107:6826/3659027081,v1:192.168.123.107:6827/3659027081]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T14:39:33.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:33 vm07 bash[55244]: audit 2026-03-09T14:39:33.029837+0000 mon.c (mon.1) 9 : audit [INF] from='osd.3 [v2:192.168.123.107:6826/3659027081,v1:192.168.123.107:6827/3659027081]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T14:39:33.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:33 vm07 bash[55244]: audit 2026-03-09T14:39:33.030106+0000 mon.a (mon.0) 218 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T14:39:33.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:33 vm07 bash[55244]: audit 2026-03-09T14:39:33.030106+0000 mon.a (mon.0) 218 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T14:39:33.405 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:39:32 vm07 bash[59212]: debug 2026-03-09T14:39:32.985+0000 7f752ee60740 -1 osd.3 0 read_superblock omap replica is missing. 
2026-03-09T14:39:33.406 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:39:33 vm07 bash[59212]: debug 2026-03-09T14:39:33.021+0000 7f752ee60740 -1 osd.3 92 log_to_monitors true 2026-03-09T14:39:33.406 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:39:33 vm07 bash[59212]: debug 2026-03-09T14:39:33.081+0000 7f7526c0b640 -1 osd.3 92 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T14:39:33.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:33 vm07 bash[56315]: audit 2026-03-09T14:39:33.029837+0000 mon.c (mon.1) 9 : audit [INF] from='osd.3 [v2:192.168.123.107:6826/3659027081,v1:192.168.123.107:6827/3659027081]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T14:39:33.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:33 vm07 bash[56315]: audit 2026-03-09T14:39:33.029837+0000 mon.c (mon.1) 9 : audit [INF] from='osd.3 [v2:192.168.123.107:6826/3659027081,v1:192.168.123.107:6827/3659027081]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T14:39:33.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:33 vm07 bash[56315]: audit 2026-03-09T14:39:33.030106+0000 mon.a (mon.0) 218 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T14:39:33.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:33 vm07 bash[56315]: audit 2026-03-09T14:39:33.030106+0000 mon.a (mon.0) 218 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T14:39:33.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:33 vm11 bash[43577]: audit 2026-03-09T14:39:33.029837+0000 mon.c (mon.1) 9 : audit [INF] from='osd.3 [v2:192.168.123.107:6826/3659027081,v1:192.168.123.107:6827/3659027081]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T14:39:33.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:33 vm11 bash[43577]: audit 2026-03-09T14:39:33.029837+0000 mon.c (mon.1) 9 : audit [INF] from='osd.3 [v2:192.168.123.107:6826/3659027081,v1:192.168.123.107:6827/3659027081]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T14:39:33.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:33 vm11 bash[43577]: audit 2026-03-09T14:39:33.030106+0000 mon.a (mon.0) 218 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T14:39:33.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:33 vm11 bash[43577]: audit 2026-03-09T14:39:33.030106+0000 mon.a (mon.0) 218 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T14:39:33.905 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:39:33 vm07 bash[52213]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:39:33] "GET /metrics HTTP/1.1" 200 37821 "" "Prometheus/2.51.0" 2026-03-09T14:39:34.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:34 vm07 bash[55244]: cluster 2026-03-09T14:39:32.527988+0000 mgr.y (mgr.44103) 89 : cluster [DBG] pgmap v25: 161 pgs: 27 stale+active+clean, 134 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T14:39:34.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 
14:39:34 vm07 bash[55244]: cluster 2026-03-09T14:39:32.527988+0000 mgr.y (mgr.44103) 89 : cluster [DBG] pgmap v25: 161 pgs: 27 stale+active+clean, 134 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T14:39:34.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:34 vm07 bash[55244]: audit 2026-03-09T14:39:33.047991+0000 mon.a (mon.0) 219 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T14:39:34.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:34 vm07 bash[55244]: audit 2026-03-09T14:39:33.047991+0000 mon.a (mon.0) 219 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T14:39:34.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:34 vm07 bash[55244]: cluster 2026-03-09T14:39:33.052385+0000 mon.a (mon.0) 220 : cluster [DBG] osdmap e95: 8 total, 7 up, 8 in 2026-03-09T14:39:34.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:34 vm07 bash[55244]: cluster 2026-03-09T14:39:33.052385+0000 mon.a (mon.0) 220 : cluster [DBG] osdmap e95: 8 total, 7 up, 8 in 2026-03-09T14:39:34.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:34 vm07 bash[55244]: audit 2026-03-09T14:39:33.056482+0000 mon.c (mon.1) 10 : audit [INF] from='osd.3 [v2:192.168.123.107:6826/3659027081,v1:192.168.123.107:6827/3659027081]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:39:34.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:34 vm07 bash[55244]: audit 2026-03-09T14:39:33.056482+0000 mon.c (mon.1) 10 : audit [INF] from='osd.3 [v2:192.168.123.107:6826/3659027081,v1:192.168.123.107:6827/3659027081]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:39:34.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:34 vm07 bash[55244]: audit 2026-03-09T14:39:33.056765+0000 mon.a (mon.0) 221 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:39:34.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:34 vm07 bash[55244]: audit 2026-03-09T14:39:33.056765+0000 mon.a (mon.0) 221 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:39:34.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:34 vm07 bash[56315]: cluster 2026-03-09T14:39:32.527988+0000 mgr.y (mgr.44103) 89 : cluster [DBG] pgmap v25: 161 pgs: 27 stale+active+clean, 134 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T14:39:34.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:34 vm07 bash[56315]: cluster 2026-03-09T14:39:32.527988+0000 mgr.y (mgr.44103) 89 : cluster [DBG] pgmap v25: 161 pgs: 27 stale+active+clean, 134 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T14:39:34.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:34 vm07 bash[56315]: audit 2026-03-09T14:39:33.047991+0000 mon.a (mon.0) 219 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T14:39:34.406 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:34 vm07 bash[56315]: audit 2026-03-09T14:39:33.047991+0000 mon.a (mon.0) 219 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T14:39:34.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:34 vm07 bash[56315]: cluster 2026-03-09T14:39:33.052385+0000 mon.a (mon.0) 220 : cluster [DBG] osdmap e95: 8 total, 7 up, 8 in 2026-03-09T14:39:34.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:34 vm07 bash[56315]: cluster 2026-03-09T14:39:33.052385+0000 mon.a (mon.0) 220 : cluster [DBG] osdmap e95: 8 total, 7 up, 8 in 2026-03-09T14:39:34.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:34 vm07 bash[56315]: audit 2026-03-09T14:39:33.056482+0000 mon.c (mon.1) 10 : audit [INF] from='osd.3 [v2:192.168.123.107:6826/3659027081,v1:192.168.123.107:6827/3659027081]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:39:34.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:34 vm07 bash[56315]: audit 2026-03-09T14:39:33.056482+0000 mon.c (mon.1) 10 : audit [INF] from='osd.3 [v2:192.168.123.107:6826/3659027081,v1:192.168.123.107:6827/3659027081]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:39:34.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:34 vm07 bash[56315]: audit 2026-03-09T14:39:33.056765+0000 mon.a (mon.0) 221 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:39:34.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:34 vm07 bash[56315]: audit 2026-03-09T14:39:33.056765+0000 mon.a (mon.0) 221 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:39:34.503 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:39:34 vm11 bash[41290]: ts=2026-03-09T14:39:34.146Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"192.168.123.111:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:39:34.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:34 vm11 bash[43577]: cluster 2026-03-09T14:39:32.527988+0000 mgr.y (mgr.44103) 89 : cluster [DBG] pgmap v25: 161 pgs: 27 stale+active+clean, 134 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T14:39:34.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:34 vm11 bash[43577]: cluster 2026-03-09T14:39:32.527988+0000 mgr.y (mgr.44103) 89 : cluster [DBG] pgmap v25: 161 pgs: 27 stale+active+clean, 134 active+clean; 457 KiB data, 100 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T14:39:34.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:34 vm11 bash[43577]: audit 2026-03-09T14:39:33.047991+0000 mon.a (mon.0) 219 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T14:39:34.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:34 vm11 bash[43577]: audit 2026-03-09T14:39:33.047991+0000 mon.a (mon.0) 219 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T14:39:34.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:34 vm11 bash[43577]: cluster 2026-03-09T14:39:33.052385+0000 mon.a (mon.0) 220 : cluster [DBG] osdmap e95: 8 total, 7 up, 8 in 2026-03-09T14:39:34.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:34 vm11 bash[43577]: cluster 2026-03-09T14:39:33.052385+0000 mon.a (mon.0) 220 : cluster [DBG] osdmap e95: 8 total, 7 up, 8 in 2026-03-09T14:39:34.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:34 vm11 bash[43577]: audit 2026-03-09T14:39:33.056482+0000 mon.c (mon.1) 10 : audit [INF] from='osd.3 [v2:192.168.123.107:6826/3659027081,v1:192.168.123.107:6827/3659027081]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:39:34.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:34 vm11 bash[43577]: audit 2026-03-09T14:39:33.056482+0000 mon.c (mon.1) 10 : audit [INF] from='osd.3 [v2:192.168.123.107:6826/3659027081,v1:192.168.123.107:6827/3659027081]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:39:34.503 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:34 vm11 bash[43577]: audit 2026-03-09T14:39:33.056765+0000 mon.a (mon.0) 221 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:39:34.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:34 vm11 bash[43577]: audit 2026-03-09T14:39:33.056765+0000 mon.a (mon.0) 221 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:39:35.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:35 vm07 bash[55244]: cluster 2026-03-09T14:39:34.048327+0000 mon.a (mon.0) 222 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:39:35.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:35 vm07 bash[55244]: cluster 2026-03-09T14:39:34.048327+0000 mon.a (mon.0) 222 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:39:35.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:35 vm07 bash[55244]: cluster 2026-03-09T14:39:34.048346+0000 mon.a (mon.0) 223 : cluster [INF] Cluster is now healthy 2026-03-09T14:39:35.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:35 vm07 bash[55244]: cluster 2026-03-09T14:39:34.048346+0000 mon.a (mon.0) 223 : cluster [INF] Cluster is now healthy 2026-03-09T14:39:35.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:35 vm07 bash[55244]: cluster 2026-03-09T14:39:34.055160+0000 mon.a (mon.0) 224 : cluster [INF] osd.3 [v2:192.168.123.107:6826/3659027081,v1:192.168.123.107:6827/3659027081] boot 2026-03-09T14:39:35.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:35 vm07 bash[55244]: cluster 2026-03-09T14:39:34.055160+0000 mon.a (mon.0) 224 : cluster [INF] osd.3 [v2:192.168.123.107:6826/3659027081,v1:192.168.123.107:6827/3659027081] boot 2026-03-09T14:39:35.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:35 vm07 bash[55244]: cluster 2026-03-09T14:39:34.055315+0000 mon.a (mon.0) 225 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-09T14:39:35.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:35 vm07 bash[55244]: cluster 2026-03-09T14:39:34.055315+0000 mon.a (mon.0) 225 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-09T14:39:35.406 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:35 vm07 bash[55244]: audit 2026-03-09T14:39:34.060205+0000 mon.a (mon.0) 226 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:39:35.406 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:35 vm07 bash[55244]: audit 2026-03-09T14:39:34.060205+0000 mon.a (mon.0) 226 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:39:35.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:35 vm07 bash[56315]: cluster 2026-03-09T14:39:34.048327+0000 mon.a (mon.0) 222 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:39:35.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:35 vm07 bash[56315]: cluster 2026-03-09T14:39:34.048327+0000 mon.a (mon.0) 222 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:39:35.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:35 vm07 bash[56315]: cluster 2026-03-09T14:39:34.048346+0000 mon.a (mon.0) 223 : cluster [INF] Cluster is now healthy 
2026-03-09T14:39:35.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:35 vm07 bash[56315]: cluster 2026-03-09T14:39:34.048346+0000 mon.a (mon.0) 223 : cluster [INF] Cluster is now healthy 2026-03-09T14:39:35.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:35 vm07 bash[56315]: cluster 2026-03-09T14:39:34.055160+0000 mon.a (mon.0) 224 : cluster [INF] osd.3 [v2:192.168.123.107:6826/3659027081,v1:192.168.123.107:6827/3659027081] boot 2026-03-09T14:39:35.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:35 vm07 bash[56315]: cluster 2026-03-09T14:39:34.055160+0000 mon.a (mon.0) 224 : cluster [INF] osd.3 [v2:192.168.123.107:6826/3659027081,v1:192.168.123.107:6827/3659027081] boot 2026-03-09T14:39:35.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:35 vm07 bash[56315]: cluster 2026-03-09T14:39:34.055315+0000 mon.a (mon.0) 225 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-09T14:39:35.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:35 vm07 bash[56315]: cluster 2026-03-09T14:39:34.055315+0000 mon.a (mon.0) 225 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-09T14:39:35.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:35 vm07 bash[56315]: audit 2026-03-09T14:39:34.060205+0000 mon.a (mon.0) 226 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:39:35.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:35 vm07 bash[56315]: audit 2026-03-09T14:39:34.060205+0000 mon.a (mon.0) 226 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:39:35.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:35 vm11 bash[43577]: cluster 2026-03-09T14:39:34.048327+0000 mon.a (mon.0) 222 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:39:35.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:35 vm11 bash[43577]: cluster 2026-03-09T14:39:34.048327+0000 mon.a (mon.0) 222 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:39:35.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:35 vm11 bash[43577]: cluster 2026-03-09T14:39:34.048346+0000 mon.a (mon.0) 223 : cluster [INF] Cluster is now healthy 2026-03-09T14:39:35.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:35 vm11 bash[43577]: cluster 2026-03-09T14:39:34.048346+0000 mon.a (mon.0) 223 : cluster [INF] Cluster is now healthy 2026-03-09T14:39:35.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:35 vm11 bash[43577]: cluster 2026-03-09T14:39:34.055160+0000 mon.a (mon.0) 224 : cluster [INF] osd.3 [v2:192.168.123.107:6826/3659027081,v1:192.168.123.107:6827/3659027081] boot 2026-03-09T14:39:35.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:35 vm11 bash[43577]: cluster 2026-03-09T14:39:34.055160+0000 mon.a (mon.0) 224 : cluster [INF] osd.3 [v2:192.168.123.107:6826/3659027081,v1:192.168.123.107:6827/3659027081] boot 2026-03-09T14:39:35.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:35 vm11 bash[43577]: cluster 2026-03-09T14:39:34.055315+0000 mon.a (mon.0) 225 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-09T14:39:35.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:35 vm11 bash[43577]: cluster 2026-03-09T14:39:34.055315+0000 mon.a (mon.0) 225 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-09T14:39:35.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:35 vm11 bash[43577]: audit 2026-03-09T14:39:34.060205+0000 mon.a 
(mon.0) 226 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:39:35.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:35 vm11 bash[43577]: audit 2026-03-09T14:39:34.060205+0000 mon.a (mon.0) 226 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:39:36.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:36 vm07 bash[55244]: cluster 2026-03-09T14:39:34.528308+0000 mgr.y (mgr.44103) 90 : cluster [DBG] pgmap v28: 161 pgs: 40 active+undersized, 26 active+undersized+degraded, 95 active+clean; 457 KiB data, 121 MiB used, 160 GiB / 160 GiB avail; 81/627 objects degraded (12.919%) 2026-03-09T14:39:36.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:36 vm07 bash[55244]: cluster 2026-03-09T14:39:34.528308+0000 mgr.y (mgr.44103) 90 : cluster [DBG] pgmap v28: 161 pgs: 40 active+undersized, 26 active+undersized+degraded, 95 active+clean; 457 KiB data, 121 MiB used, 160 GiB / 160 GiB avail; 81/627 objects degraded (12.919%) 2026-03-09T14:39:36.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:36 vm07 bash[55244]: cluster 2026-03-09T14:39:35.062188+0000 mon.a (mon.0) 227 : cluster [WRN] Health check failed: Degraded data redundancy: 81/627 objects degraded (12.919%), 26 pgs degraded (PG_DEGRADED) 2026-03-09T14:39:36.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:36 vm07 bash[55244]: cluster 2026-03-09T14:39:35.062188+0000 mon.a (mon.0) 227 : cluster [WRN] Health check failed: Degraded data redundancy: 81/627 objects degraded (12.919%), 26 pgs degraded (PG_DEGRADED) 2026-03-09T14:39:36.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:36 vm07 bash[55244]: cluster 2026-03-09T14:39:35.076826+0000 mon.a (mon.0) 228 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-09T14:39:36.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:36 vm07 bash[55244]: cluster 2026-03-09T14:39:35.076826+0000 mon.a (mon.0) 228 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-09T14:39:36.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:36 vm07 bash[56315]: cluster 2026-03-09T14:39:34.528308+0000 mgr.y (mgr.44103) 90 : cluster [DBG] pgmap v28: 161 pgs: 40 active+undersized, 26 active+undersized+degraded, 95 active+clean; 457 KiB data, 121 MiB used, 160 GiB / 160 GiB avail; 81/627 objects degraded (12.919%) 2026-03-09T14:39:36.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:36 vm07 bash[56315]: cluster 2026-03-09T14:39:34.528308+0000 mgr.y (mgr.44103) 90 : cluster [DBG] pgmap v28: 161 pgs: 40 active+undersized, 26 active+undersized+degraded, 95 active+clean; 457 KiB data, 121 MiB used, 160 GiB / 160 GiB avail; 81/627 objects degraded (12.919%) 2026-03-09T14:39:36.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:36 vm07 bash[56315]: cluster 2026-03-09T14:39:35.062188+0000 mon.a (mon.0) 227 : cluster [WRN] Health check failed: Degraded data redundancy: 81/627 objects degraded (12.919%), 26 pgs degraded (PG_DEGRADED) 2026-03-09T14:39:36.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:36 vm07 bash[56315]: cluster 2026-03-09T14:39:35.062188+0000 mon.a (mon.0) 227 : cluster [WRN] Health check failed: Degraded data redundancy: 81/627 objects degraded (12.919%), 26 pgs degraded (PG_DEGRADED) 2026-03-09T14:39:36.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:36 vm07 bash[56315]: cluster 2026-03-09T14:39:35.076826+0000 mon.a (mon.0) 228 : cluster [DBG] osdmap e97: 8 total, 8 
up, 8 in 2026-03-09T14:39:36.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:36 vm07 bash[56315]: cluster 2026-03-09T14:39:35.076826+0000 mon.a (mon.0) 228 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-09T14:39:36.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:36 vm11 bash[43577]: cluster 2026-03-09T14:39:34.528308+0000 mgr.y (mgr.44103) 90 : cluster [DBG] pgmap v28: 161 pgs: 40 active+undersized, 26 active+undersized+degraded, 95 active+clean; 457 KiB data, 121 MiB used, 160 GiB / 160 GiB avail; 81/627 objects degraded (12.919%) 2026-03-09T14:39:36.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:36 vm11 bash[43577]: cluster 2026-03-09T14:39:34.528308+0000 mgr.y (mgr.44103) 90 : cluster [DBG] pgmap v28: 161 pgs: 40 active+undersized, 26 active+undersized+degraded, 95 active+clean; 457 KiB data, 121 MiB used, 160 GiB / 160 GiB avail; 81/627 objects degraded (12.919%) 2026-03-09T14:39:36.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:36 vm11 bash[43577]: cluster 2026-03-09T14:39:35.062188+0000 mon.a (mon.0) 227 : cluster [WRN] Health check failed: Degraded data redundancy: 81/627 objects degraded (12.919%), 26 pgs degraded (PG_DEGRADED) 2026-03-09T14:39:36.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:36 vm11 bash[43577]: cluster 2026-03-09T14:39:35.062188+0000 mon.a (mon.0) 227 : cluster [WRN] Health check failed: Degraded data redundancy: 81/627 objects degraded (12.919%), 26 pgs degraded (PG_DEGRADED) 2026-03-09T14:39:36.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:36 vm11 bash[43577]: cluster 2026-03-09T14:39:35.076826+0000 mon.a (mon.0) 228 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-09T14:39:36.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:36 vm11 bash[43577]: cluster 2026-03-09T14:39:35.076826+0000 mon.a (mon.0) 228 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-09T14:39:37.253 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:39:36 vm11 bash[41290]: ts=2026-03-09T14:39:36.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:39:37.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:37 vm11 bash[43577]: audit 2026-03-09T14:39:36.501262+0000 mon.a (mon.0) 229 : audit 
[INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:37.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:37 vm11 bash[43577]: audit 2026-03-09T14:39:36.501262+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:37.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:37 vm11 bash[43577]: audit 2026-03-09T14:39:36.507857+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:37.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:37 vm11 bash[43577]: audit 2026-03-09T14:39:36.507857+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:37.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:37 vm11 bash[43577]: cluster 2026-03-09T14:39:36.534388+0000 mgr.y (mgr.44103) 91 : cluster [DBG] pgmap v30: 161 pgs: 40 active+undersized, 26 active+undersized+degraded, 95 active+clean; 457 KiB data, 121 MiB used, 160 GiB / 160 GiB avail; 81/627 objects degraded (12.919%) 2026-03-09T14:39:37.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:37 vm11 bash[43577]: cluster 2026-03-09T14:39:36.534388+0000 mgr.y (mgr.44103) 91 : cluster [DBG] pgmap v30: 161 pgs: 40 active+undersized, 26 active+undersized+degraded, 95 active+clean; 457 KiB data, 121 MiB used, 160 GiB / 160 GiB avail; 81/627 objects degraded (12.919%) 2026-03-09T14:39:37.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:37 vm11 bash[43577]: audit 2026-03-09T14:39:37.094285+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:37.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:37 vm11 bash[43577]: audit 2026-03-09T14:39:37.094285+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:37.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:37 vm11 bash[43577]: audit 2026-03-09T14:39:37.101012+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:37.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:37 vm11 bash[43577]: audit 2026-03-09T14:39:37.101012+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:37.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:37 vm07 bash[55244]: audit 2026-03-09T14:39:36.501262+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:37.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:37 vm07 bash[55244]: audit 2026-03-09T14:39:36.501262+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:37.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:37 vm07 bash[55244]: audit 2026-03-09T14:39:36.507857+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:37.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:37 vm07 bash[55244]: audit 2026-03-09T14:39:36.507857+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:37.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:37 vm07 bash[55244]: cluster 2026-03-09T14:39:36.534388+0000 mgr.y (mgr.44103) 91 : cluster [DBG] pgmap v30: 161 pgs: 40 active+undersized, 26 active+undersized+degraded, 95 active+clean; 457 KiB data, 121 MiB 
used, 160 GiB / 160 GiB avail; 81/627 objects degraded (12.919%) 2026-03-09T14:39:37.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:37 vm07 bash[55244]: cluster 2026-03-09T14:39:36.534388+0000 mgr.y (mgr.44103) 91 : cluster [DBG] pgmap v30: 161 pgs: 40 active+undersized, 26 active+undersized+degraded, 95 active+clean; 457 KiB data, 121 MiB used, 160 GiB / 160 GiB avail; 81/627 objects degraded (12.919%) 2026-03-09T14:39:37.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:37 vm07 bash[55244]: audit 2026-03-09T14:39:37.094285+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:37.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:37 vm07 bash[55244]: audit 2026-03-09T14:39:37.094285+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:37.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:37 vm07 bash[55244]: audit 2026-03-09T14:39:37.101012+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:37.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:37 vm07 bash[55244]: audit 2026-03-09T14:39:37.101012+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:37.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:37 vm07 bash[56315]: audit 2026-03-09T14:39:36.501262+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:37.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:37 vm07 bash[56315]: audit 2026-03-09T14:39:36.501262+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:37.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:37 vm07 bash[56315]: audit 2026-03-09T14:39:36.507857+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:37.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:37 vm07 bash[56315]: audit 2026-03-09T14:39:36.507857+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:37.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:37 vm07 bash[56315]: cluster 2026-03-09T14:39:36.534388+0000 mgr.y (mgr.44103) 91 : cluster [DBG] pgmap v30: 161 pgs: 40 active+undersized, 26 active+undersized+degraded, 95 active+clean; 457 KiB data, 121 MiB used, 160 GiB / 160 GiB avail; 81/627 objects degraded (12.919%) 2026-03-09T14:39:37.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:37 vm07 bash[56315]: cluster 2026-03-09T14:39:36.534388+0000 mgr.y (mgr.44103) 91 : cluster [DBG] pgmap v30: 161 pgs: 40 active+undersized, 26 active+undersized+degraded, 95 active+clean; 457 KiB data, 121 MiB used, 160 GiB / 160 GiB avail; 81/627 objects degraded (12.919%) 2026-03-09T14:39:37.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:37 vm07 bash[56315]: audit 2026-03-09T14:39:37.094285+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:37.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:37 vm07 bash[56315]: audit 2026-03-09T14:39:37.094285+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:37.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:37 vm07 bash[56315]: audit 2026-03-09T14:39:37.101012+0000 mon.a (mon.0) 232 : audit [INF] 
from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:37.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:37 vm07 bash[56315]: audit 2026-03-09T14:39:37.101012+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:38.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:38 vm07 bash[55244]: audit 2026-03-09T14:39:37.481992+0000 mgr.y (mgr.44103) 92 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:39:38.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:38 vm07 bash[55244]: audit 2026-03-09T14:39:37.481992+0000 mgr.y (mgr.44103) 92 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:39:38.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:38 vm07 bash[55244]: audit 2026-03-09T14:39:37.575016+0000 mon.a (mon.0) 233 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:38.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:38 vm07 bash[55244]: audit 2026-03-09T14:39:37.575016+0000 mon.a (mon.0) 233 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:38.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:38 vm07 bash[55244]: audit 2026-03-09T14:39:37.576229+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:39:38.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:38 vm07 bash[55244]: audit 2026-03-09T14:39:37.576229+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:39:38.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:38 vm07 bash[56315]: audit 2026-03-09T14:39:37.481992+0000 mgr.y (mgr.44103) 92 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:39:38.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:38 vm07 bash[56315]: audit 2026-03-09T14:39:37.481992+0000 mgr.y (mgr.44103) 92 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:39:38.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:38 vm07 bash[56315]: audit 2026-03-09T14:39:37.575016+0000 mon.a (mon.0) 233 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:38.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:38 vm07 bash[56315]: audit 2026-03-09T14:39:37.575016+0000 mon.a (mon.0) 233 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:38.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:38 vm07 bash[56315]: audit 2026-03-09T14:39:37.576229+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:39:38.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:38 vm07 bash[56315]: audit 2026-03-09T14:39:37.576229+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 
2026-03-09T14:39:39.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:38 vm11 bash[43577]: audit 2026-03-09T14:39:37.481992+0000 mgr.y (mgr.44103) 92 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:39:39.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:38 vm11 bash[43577]: audit 2026-03-09T14:39:37.481992+0000 mgr.y (mgr.44103) 92 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:39:39.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:38 vm11 bash[43577]: audit 2026-03-09T14:39:37.575016+0000 mon.a (mon.0) 233 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:39.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:38 vm11 bash[43577]: audit 2026-03-09T14:39:37.575016+0000 mon.a (mon.0) 233 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:39.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:38 vm11 bash[43577]: audit 2026-03-09T14:39:37.576229+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:39:39.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:38 vm11 bash[43577]: audit 2026-03-09T14:39:37.576229+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:39:39.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:39 vm07 bash[56315]: cluster 2026-03-09T14:39:38.534766+0000 mgr.y (mgr.44103) 93 : cluster [DBG] pgmap v31: 161 pgs: 161 active+clean; 457 KiB data, 122 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:39:39.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:39 vm07 bash[56315]: cluster 2026-03-09T14:39:38.534766+0000 mgr.y (mgr.44103) 93 : cluster [DBG] pgmap v31: 161 pgs: 161 active+clean; 457 KiB data, 122 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:39:39.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:39 vm07 bash[56315]: cluster 2026-03-09T14:39:38.573479+0000 mon.a (mon.0) 235 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 81/627 objects degraded (12.919%), 26 pgs degraded) 2026-03-09T14:39:39.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:39 vm07 bash[56315]: cluster 2026-03-09T14:39:38.573479+0000 mon.a (mon.0) 235 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 81/627 objects degraded (12.919%), 26 pgs degraded) 2026-03-09T14:39:39.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:39 vm07 bash[56315]: cluster 2026-03-09T14:39:38.573498+0000 mon.a (mon.0) 236 : cluster [INF] Cluster is now healthy 2026-03-09T14:39:39.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:39 vm07 bash[56315]: cluster 2026-03-09T14:39:38.573498+0000 mon.a (mon.0) 236 : cluster [INF] Cluster is now healthy 2026-03-09T14:39:39.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:39 vm07 bash[55244]: cluster 2026-03-09T14:39:38.534766+0000 mgr.y (mgr.44103) 93 : cluster [DBG] pgmap v31: 161 pgs: 161 active+clean; 457 KiB data, 122 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:39:39.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:39 vm07 bash[55244]: cluster 2026-03-09T14:39:38.534766+0000 mgr.y (mgr.44103) 93 : cluster 
[DBG] pgmap v31: 161 pgs: 161 active+clean; 457 KiB data, 122 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:39:39.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:39 vm07 bash[55244]: cluster 2026-03-09T14:39:38.573479+0000 mon.a (mon.0) 235 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 81/627 objects degraded (12.919%), 26 pgs degraded) 2026-03-09T14:39:39.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:39 vm07 bash[55244]: cluster 2026-03-09T14:39:38.573479+0000 mon.a (mon.0) 235 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 81/627 objects degraded (12.919%), 26 pgs degraded) 2026-03-09T14:39:39.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:39 vm07 bash[55244]: cluster 2026-03-09T14:39:38.573498+0000 mon.a (mon.0) 236 : cluster [INF] Cluster is now healthy 2026-03-09T14:39:39.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:39 vm07 bash[55244]: cluster 2026-03-09T14:39:38.573498+0000 mon.a (mon.0) 236 : cluster [INF] Cluster is now healthy 2026-03-09T14:39:40.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:39 vm11 bash[43577]: cluster 2026-03-09T14:39:38.534766+0000 mgr.y (mgr.44103) 93 : cluster [DBG] pgmap v31: 161 pgs: 161 active+clean; 457 KiB data, 122 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:39:40.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:39 vm11 bash[43577]: cluster 2026-03-09T14:39:38.534766+0000 mgr.y (mgr.44103) 93 : cluster [DBG] pgmap v31: 161 pgs: 161 active+clean; 457 KiB data, 122 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:39:40.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:39 vm11 bash[43577]: cluster 2026-03-09T14:39:38.573479+0000 mon.a (mon.0) 235 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 81/627 objects degraded (12.919%), 26 pgs degraded) 2026-03-09T14:39:40.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:39 vm11 bash[43577]: cluster 2026-03-09T14:39:38.573479+0000 mon.a (mon.0) 235 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 81/627 objects degraded (12.919%), 26 pgs degraded) 2026-03-09T14:39:40.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:39 vm11 bash[43577]: cluster 2026-03-09T14:39:38.573498+0000 mon.a (mon.0) 236 : cluster [INF] Cluster is now healthy 2026-03-09T14:39:40.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:39 vm11 bash[43577]: cluster 2026-03-09T14:39:38.573498+0000 mon.a (mon.0) 236 : cluster [INF] Cluster is now healthy 2026-03-09T14:39:41.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:41 vm07 bash[56315]: cluster 2026-03-09T14:39:40.535093+0000 mgr.y (mgr.44103) 94 : cluster [DBG] pgmap v32: 161 pgs: 161 active+clean; 457 KiB data, 122 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:39:41.907 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:41 vm07 bash[56315]: cluster 2026-03-09T14:39:40.535093+0000 mgr.y (mgr.44103) 94 : cluster [DBG] pgmap v32: 161 pgs: 161 active+clean; 457 KiB data, 122 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:39:41.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:41 vm07 bash[55244]: cluster 2026-03-09T14:39:40.535093+0000 mgr.y (mgr.44103) 94 : cluster [DBG] pgmap v32: 161 pgs: 161 active+clean; 457 KiB data, 122 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:39:41.907 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:41 vm07 bash[55244]: cluster 2026-03-09T14:39:40.535093+0000 mgr.y (mgr.44103) 94 : cluster [DBG] pgmap v32: 161 pgs: 161 
active+clean; 457 KiB data, 122 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:39:42.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:41 vm11 bash[43577]: cluster 2026-03-09T14:39:40.535093+0000 mgr.y (mgr.44103) 94 : cluster [DBG] pgmap v32: 161 pgs: 161 active+clean; 457 KiB data, 122 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:39:42.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:41 vm11 bash[43577]: cluster 2026-03-09T14:39:40.535093+0000 mgr.y (mgr.44103) 94 : cluster [DBG] pgmap v32: 161 pgs: 161 active+clean; 457 KiB data, 122 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:39:43.900 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:39:43 vm07 bash[52213]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:39:43] "GET /metrics HTTP/1.1" 200 37744 "" "Prometheus/2.51.0" 2026-03-09T14:39:43.901 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:43 vm07 bash[55244]: cluster 2026-03-09T14:39:42.535384+0000 mgr.y (mgr.44103) 95 : cluster [DBG] pgmap v33: 161 pgs: 161 active+clean; 457 KiB data, 122 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:39:43.901 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:43 vm07 bash[55244]: cluster 2026-03-09T14:39:42.535384+0000 mgr.y (mgr.44103) 95 : cluster [DBG] pgmap v33: 161 pgs: 161 active+clean; 457 KiB data, 122 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:39:43.901 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:43 vm07 bash[56315]: cluster 2026-03-09T14:39:42.535384+0000 mgr.y (mgr.44103) 95 : cluster [DBG] pgmap v33: 161 pgs: 161 active+clean; 457 KiB data, 122 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:39:43.901 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:43 vm07 bash[56315]: cluster 2026-03-09T14:39:42.535384+0000 mgr.y (mgr.44103) 95 : cluster [DBG] pgmap v33: 161 pgs: 161 active+clean; 457 KiB data, 122 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:39:44.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:43 vm11 bash[43577]: cluster 2026-03-09T14:39:42.535384+0000 mgr.y (mgr.44103) 95 : cluster [DBG] pgmap v33: 161 pgs: 161 active+clean; 457 KiB data, 122 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:39:44.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:43 vm11 bash[43577]: cluster 2026-03-09T14:39:42.535384+0000 mgr.y (mgr.44103) 95 : cluster [DBG] pgmap v33: 161 pgs: 161 active+clean; 457 KiB data, 122 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:39:44.503 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:39:44 vm11 bash[41290]: ts=2026-03-09T14:39:44.146Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"192.168.123.111:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:39:44.878 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:39:44 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:44.878 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:44 vm07 bash[55244]: audit 2026-03-09T14:39:43.610282+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:44.878 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:44 vm07 bash[55244]: audit 2026-03-09T14:39:43.610282+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:44.878 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:44 vm07 bash[55244]: audit 2026-03-09T14:39:43.614493+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:44.878 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:44 vm07 bash[55244]: audit 2026-03-09T14:39:43.614493+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:44.878 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:44 vm07 bash[55244]: audit 2026-03-09T14:39:43.615252+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:44.878 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:44 vm07 bash[55244]: audit 2026-03-09T14:39:43.615252+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:44.878 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:44 vm07 bash[55244]: audit 2026-03-09T14:39:43.615779+0000 mon.a (mon.0) 240 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:39:44.878 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:44 vm07 bash[55244]: audit 2026-03-09T14:39:43.615779+0000 mon.a (mon.0) 240 : audit [INF] from='mgr.44103 
192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:39:44.878 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:44 vm07 bash[55244]: audit 2026-03-09T14:39:43.619735+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:44.878 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:44 vm07 bash[55244]: audit 2026-03-09T14:39:43.619735+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:44.878 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:44 vm07 bash[55244]: audit 2026-03-09T14:39:43.658502+0000 mon.a (mon.0) 242 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:39:44.878 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:44 vm07 bash[55244]: audit 2026-03-09T14:39:43.658502+0000 mon.a (mon.0) 242 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:39:44.878 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:44 vm07 bash[55244]: audit 2026-03-09T14:39:43.659474+0000 mon.a (mon.0) 243 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:44.878 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:44 vm07 bash[55244]: audit 2026-03-09T14:39:43.659474+0000 mon.a (mon.0) 243 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:44.878 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:44 vm07 bash[55244]: audit 2026-03-09T14:39:43.660137+0000 mon.a (mon.0) 244 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:44.878 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:44 vm07 bash[55244]: audit 2026-03-09T14:39:43.660137+0000 mon.a (mon.0) 244 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:44.878 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:44 vm07 bash[55244]: audit 2026-03-09T14:39:43.660638+0000 mon.a (mon.0) 245 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:44.878 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:44 vm07 bash[55244]: audit 2026-03-09T14:39:43.660638+0000 mon.a (mon.0) 245 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:44.878 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:44 vm07 bash[55244]: audit 2026-03-09T14:39:43.661204+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T14:39:44.878 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:44 vm07 bash[55244]: audit 2026-03-09T14:39:43.661204+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T14:39:44.878 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:44 vm07 bash[55244]: audit 2026-03-09T14:39:43.661366+0000 mgr.y (mgr.44103) 96 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T14:39:44.878 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:44 vm07 bash[55244]: audit 2026-03-09T14:39:43.661366+0000 mgr.y (mgr.44103) 96 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T14:39:44.878 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:44 vm07 bash[55244]: cephadm 2026-03-09T14:39:43.661867+0000 mgr.y (mgr.44103) 97 : cephadm [INF] Upgrade: osd.2 is safe to restart 2026-03-09T14:39:44.878 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:44 vm07 bash[55244]: cephadm 2026-03-09T14:39:43.661867+0000 mgr.y (mgr.44103) 97 : cephadm [INF] Upgrade: osd.2 is safe to restart 2026-03-09T14:39:44.878 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:44 vm07 bash[55244]: cephadm 2026-03-09T14:39:44.060074+0000 mgr.y (mgr.44103) 98 : cephadm [INF] Upgrade: Updating osd.2 2026-03-09T14:39:44.878 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:44 vm07 bash[55244]: cephadm 2026-03-09T14:39:44.060074+0000 mgr.y (mgr.44103) 98 : cephadm [INF] Upgrade: Updating osd.2 2026-03-09T14:39:44.878 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:44 vm07 bash[55244]: audit 2026-03-09T14:39:44.064032+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:44.878 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:44 vm07 bash[55244]: audit 2026-03-09T14:39:44.064032+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:44.878 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:44 vm07 bash[55244]: audit 2026-03-09T14:39:44.069309+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T14:39:44.878 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:44 vm07 bash[55244]: audit 2026-03-09T14:39:44.069309+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T14:39:44.878 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:44 vm07 bash[55244]: audit 2026-03-09T14:39:44.069872+0000 mon.a (mon.0) 249 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:44.878 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:44 vm07 bash[55244]: audit 2026-03-09T14:39:44.069872+0000 mon.a (mon.0) 249 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:44.878 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:44 vm07 bash[55244]: cephadm 2026-03-09T14:39:44.071116+0000 mgr.y (mgr.44103) 99 : cephadm [INF] Deploying daemon osd.2 on vm07 2026-03-09T14:39:44.878 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:44 vm07 bash[55244]: cephadm 2026-03-09T14:39:44.071116+0000 mgr.y (mgr.44103) 99 : cephadm [INF] Deploying daemon osd.2 on vm07 2026-03-09T14:39:44.878 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:44 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:44.879 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:39:44 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:44.879 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:44 vm07 bash[56315]: audit 2026-03-09T14:39:43.610282+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:44.879 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:44 vm07 bash[56315]: audit 2026-03-09T14:39:43.610282+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:44.879 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:44 vm07 bash[56315]: audit 2026-03-09T14:39:43.614493+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:44.879 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:44 vm07 bash[56315]: audit 2026-03-09T14:39:43.614493+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:44.879 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:44 vm07 bash[56315]: audit 2026-03-09T14:39:43.615252+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:44.879 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:44 vm07 bash[56315]: audit 2026-03-09T14:39:43.615252+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:44.879 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:44 vm07 bash[56315]: audit 2026-03-09T14:39:43.615779+0000 mon.a (mon.0) 240 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:39:44.879 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:44 vm07 bash[56315]: audit 2026-03-09T14:39:43.615779+0000 mon.a (mon.0) 240 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:39:44.879 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:44 vm07 bash[56315]: audit 2026-03-09T14:39:43.619735+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:44.879 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:44 vm07 bash[56315]: audit 2026-03-09T14:39:43.619735+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:44.879 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:44 vm07 bash[56315]: audit 2026-03-09T14:39:43.658502+0000 mon.a (mon.0) 242 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:39:44.879 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:44 vm07 bash[56315]: audit 2026-03-09T14:39:43.658502+0000 mon.a (mon.0) 242 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' 
entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:39:44.879 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:44 vm07 bash[56315]: audit 2026-03-09T14:39:43.659474+0000 mon.a (mon.0) 243 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:44.879 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:44 vm07 bash[56315]: audit 2026-03-09T14:39:43.659474+0000 mon.a (mon.0) 243 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:44.879 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:44 vm07 bash[56315]: audit 2026-03-09T14:39:43.660137+0000 mon.a (mon.0) 244 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:44.879 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:44 vm07 bash[56315]: audit 2026-03-09T14:39:43.660137+0000 mon.a (mon.0) 244 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:44.879 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:44 vm07 bash[56315]: audit 2026-03-09T14:39:43.660638+0000 mon.a (mon.0) 245 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:44.879 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:44 vm07 bash[56315]: audit 2026-03-09T14:39:43.660638+0000 mon.a (mon.0) 245 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:44.879 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:44 vm07 bash[56315]: audit 2026-03-09T14:39:43.661204+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T14:39:44.879 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:44 vm07 bash[56315]: audit 2026-03-09T14:39:43.661204+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T14:39:44.879 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:39:44 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:44.879 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:39:44 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:44.879 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:39:44 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:44.879 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:44 vm07 bash[56315]: audit 2026-03-09T14:39:43.661366+0000 mgr.y (mgr.44103) 96 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T14:39:44.879 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:44 vm07 bash[56315]: audit 2026-03-09T14:39:43.661366+0000 mgr.y (mgr.44103) 96 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T14:39:44.879 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:44 vm07 bash[56315]: cephadm 2026-03-09T14:39:43.661867+0000 mgr.y (mgr.44103) 97 : cephadm [INF] Upgrade: osd.2 is safe to restart 2026-03-09T14:39:44.879 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:44 vm07 bash[56315]: cephadm 2026-03-09T14:39:43.661867+0000 mgr.y (mgr.44103) 97 : cephadm [INF] Upgrade: osd.2 is safe to restart 2026-03-09T14:39:44.879 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:44 vm07 bash[56315]: cephadm 2026-03-09T14:39:44.060074+0000 mgr.y (mgr.44103) 98 : cephadm [INF] Upgrade: Updating osd.2 2026-03-09T14:39:44.879 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:44 vm07 bash[56315]: cephadm 2026-03-09T14:39:44.060074+0000 mgr.y (mgr.44103) 98 : cephadm [INF] Upgrade: Updating osd.2 2026-03-09T14:39:44.879 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:44 vm07 bash[56315]: audit 2026-03-09T14:39:44.064032+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:44.879 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:44 vm07 bash[56315]: audit 2026-03-09T14:39:44.064032+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:44.879 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:44 vm07 bash[56315]: audit 2026-03-09T14:39:44.069309+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T14:39:44.879 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:44 vm07 bash[56315]: audit 2026-03-09T14:39:44.069309+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T14:39:44.879 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:44 vm07 bash[56315]: audit 2026-03-09T14:39:44.069872+0000 mon.a (mon.0) 249 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:44.879 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:44 vm07 bash[56315]: audit 2026-03-09T14:39:44.069872+0000 mon.a (mon.0) 249 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:44.879 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:44 vm07 bash[56315]: cephadm 2026-03-09T14:39:44.071116+0000 mgr.y (mgr.44103) 99 : cephadm [INF] Deploying daemon osd.2 on vm07 2026-03-09T14:39:44.879 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:44 vm07 bash[56315]: cephadm 2026-03-09T14:39:44.071116+0000 mgr.y (mgr.44103) 99 : cephadm [INF] Deploying daemon osd.2 on vm07 2026-03-09T14:39:44.879 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:44 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:44.879 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:39:44 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:44.880 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:39:44 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:45.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:44 vm11 bash[43577]: audit 2026-03-09T14:39:43.610282+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:45.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:44 vm11 bash[43577]: audit 2026-03-09T14:39:43.610282+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:45.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:44 vm11 bash[43577]: audit 2026-03-09T14:39:43.614493+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:45.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:44 vm11 bash[43577]: audit 2026-03-09T14:39:43.614493+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:45.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:44 vm11 bash[43577]: audit 2026-03-09T14:39:43.615252+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:45.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:44 vm11 bash[43577]: audit 2026-03-09T14:39:43.615252+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:45.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:44 vm11 bash[43577]: audit 2026-03-09T14:39:43.615779+0000 mon.a (mon.0) 240 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:39:45.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:44 vm11 bash[43577]: audit 2026-03-09T14:39:43.615779+0000 mon.a (mon.0) 240 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:39:45.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:44 vm11 bash[43577]: audit 
2026-03-09T14:39:43.619735+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:45.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:44 vm11 bash[43577]: audit 2026-03-09T14:39:43.619735+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:45.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:44 vm11 bash[43577]: audit 2026-03-09T14:39:43.658502+0000 mon.a (mon.0) 242 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:39:45.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:44 vm11 bash[43577]: audit 2026-03-09T14:39:43.658502+0000 mon.a (mon.0) 242 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:39:45.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:44 vm11 bash[43577]: audit 2026-03-09T14:39:43.659474+0000 mon.a (mon.0) 243 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:45.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:44 vm11 bash[43577]: audit 2026-03-09T14:39:43.659474+0000 mon.a (mon.0) 243 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:45.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:44 vm11 bash[43577]: audit 2026-03-09T14:39:43.660137+0000 mon.a (mon.0) 244 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:45.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:44 vm11 bash[43577]: audit 2026-03-09T14:39:43.660137+0000 mon.a (mon.0) 244 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:45.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:44 vm11 bash[43577]: audit 2026-03-09T14:39:43.660638+0000 mon.a (mon.0) 245 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:45.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:44 vm11 bash[43577]: audit 2026-03-09T14:39:43.660638+0000 mon.a (mon.0) 245 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:45.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:44 vm11 bash[43577]: audit 2026-03-09T14:39:43.661204+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T14:39:45.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:44 vm11 bash[43577]: audit 2026-03-09T14:39:43.661204+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T14:39:45.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:44 vm11 bash[43577]: audit 2026-03-09T14:39:43.661366+0000 mgr.y (mgr.44103) 96 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T14:39:45.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:44 vm11 bash[43577]: audit 2026-03-09T14:39:43.661366+0000 mgr.y (mgr.44103) 96 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-09T14:39:45.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:44 vm11 bash[43577]: cephadm 2026-03-09T14:39:43.661867+0000 mgr.y (mgr.44103) 97 : cephadm [INF] Upgrade: osd.2 is safe to restart 2026-03-09T14:39:45.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:44 vm11 bash[43577]: cephadm 2026-03-09T14:39:43.661867+0000 mgr.y (mgr.44103) 97 : cephadm [INF] Upgrade: osd.2 is safe to restart 2026-03-09T14:39:45.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:44 vm11 bash[43577]: cephadm 2026-03-09T14:39:44.060074+0000 mgr.y (mgr.44103) 98 : cephadm [INF] Upgrade: Updating osd.2 2026-03-09T14:39:45.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:44 vm11 bash[43577]: cephadm 2026-03-09T14:39:44.060074+0000 mgr.y (mgr.44103) 98 : cephadm [INF] Upgrade: Updating osd.2 2026-03-09T14:39:45.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:44 vm11 bash[43577]: audit 2026-03-09T14:39:44.064032+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:45.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:44 vm11 bash[43577]: audit 2026-03-09T14:39:44.064032+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:45.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:44 vm11 bash[43577]: audit 2026-03-09T14:39:44.069309+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T14:39:45.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:44 vm11 bash[43577]: audit 2026-03-09T14:39:44.069309+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T14:39:45.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:44 vm11 bash[43577]: audit 2026-03-09T14:39:44.069872+0000 mon.a (mon.0) 249 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:45.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:44 vm11 bash[43577]: audit 2026-03-09T14:39:44.069872+0000 mon.a (mon.0) 249 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:39:45.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:44 vm11 bash[43577]: cephadm 2026-03-09T14:39:44.071116+0000 mgr.y (mgr.44103) 99 : cephadm [INF] Deploying daemon osd.2 on vm07 2026-03-09T14:39:45.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:44 vm11 bash[43577]: cephadm 2026-03-09T14:39:44.071116+0000 mgr.y (mgr.44103) 99 : cephadm [INF] Deploying daemon osd.2 on vm07 2026-03-09T14:39:45.155 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:39:44 vm07 systemd[1]: Stopping Ceph osd.2 for f59f9828-1bc3-11f1-bfd8-7b3d0c866040... 
2026-03-09T14:39:45.155 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:39:44 vm07 bash[31564]: debug 2026-03-09T14:39:44.921+0000 7f289b6fa700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T14:39:45.155 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:39:44 vm07 bash[31564]: debug 2026-03-09T14:39:44.921+0000 7f289b6fa700 -1 osd.2 97 *** Got signal Terminated *** 2026-03-09T14:39:45.155 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:39:44 vm07 bash[31564]: debug 2026-03-09T14:39:44.921+0000 7f289b6fa700 -1 osd.2 97 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T14:39:45.905 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:39:45 vm07 bash[60697]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-osd-2 2026-03-09T14:39:45.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:45 vm07 bash[55244]: cluster 2026-03-09T14:39:44.535966+0000 mgr.y (mgr.44103) 100 : cluster [DBG] pgmap v34: 161 pgs: 161 active+clean; 457 KiB data, 122 MiB used, 160 GiB / 160 GiB avail; 409 B/s rd, 0 op/s 2026-03-09T14:39:45.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:45 vm07 bash[55244]: cluster 2026-03-09T14:39:44.535966+0000 mgr.y (mgr.44103) 100 : cluster [DBG] pgmap v34: 161 pgs: 161 active+clean; 457 KiB data, 122 MiB used, 160 GiB / 160 GiB avail; 409 B/s rd, 0 op/s 2026-03-09T14:39:45.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:45 vm07 bash[55244]: cluster 2026-03-09T14:39:44.922609+0000 mon.a (mon.0) 250 : cluster [INF] osd.2 marked itself down and dead 2026-03-09T14:39:45.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:45 vm07 bash[55244]: cluster 2026-03-09T14:39:44.922609+0000 mon.a (mon.0) 250 : cluster [INF] osd.2 marked itself down and dead 2026-03-09T14:39:45.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:45 vm07 bash[56315]: cluster 2026-03-09T14:39:44.535966+0000 mgr.y (mgr.44103) 100 : cluster [DBG] pgmap v34: 161 pgs: 161 active+clean; 457 KiB data, 122 MiB used, 160 GiB / 160 GiB avail; 409 B/s rd, 0 op/s 2026-03-09T14:39:45.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:45 vm07 bash[56315]: cluster 2026-03-09T14:39:44.535966+0000 mgr.y (mgr.44103) 100 : cluster [DBG] pgmap v34: 161 pgs: 161 active+clean; 457 KiB data, 122 MiB used, 160 GiB / 160 GiB avail; 409 B/s rd, 0 op/s 2026-03-09T14:39:45.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:45 vm07 bash[56315]: cluster 2026-03-09T14:39:44.922609+0000 mon.a (mon.0) 250 : cluster [INF] osd.2 marked itself down and dead 2026-03-09T14:39:45.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:45 vm07 bash[56315]: cluster 2026-03-09T14:39:44.922609+0000 mon.a (mon.0) 250 : cluster [INF] osd.2 marked itself down and dead 2026-03-09T14:39:46.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:45 vm11 bash[43577]: cluster 2026-03-09T14:39:44.535966+0000 mgr.y (mgr.44103) 100 : cluster [DBG] pgmap v34: 161 pgs: 161 active+clean; 457 KiB data, 122 MiB used, 160 GiB / 160 GiB avail; 409 B/s rd, 0 op/s 2026-03-09T14:39:46.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:45 vm11 bash[43577]: cluster 2026-03-09T14:39:44.535966+0000 mgr.y (mgr.44103) 100 : cluster [DBG] pgmap v34: 161 pgs: 161 active+clean; 457 KiB data, 122 MiB used, 160 GiB / 160 GiB avail; 409 B/s rd, 0 op/s 2026-03-09T14:39:46.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:45 vm11 bash[43577]: cluster 
2026-03-09T14:39:44.922609+0000 mon.a (mon.0) 250 : cluster [INF] osd.2 marked itself down and dead 2026-03-09T14:39:46.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:45 vm11 bash[43577]: cluster 2026-03-09T14:39:44.922609+0000 mon.a (mon.0) 250 : cluster [INF] osd.2 marked itself down and dead 2026-03-09T14:39:46.218 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:46 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:46.219 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:39:46 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:46.219 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:46 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:46.219 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:39:46 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:46.219 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:39:46 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:46.219 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:39:46 vm07 systemd[1]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@osd.2.service: Deactivated successfully. 2026-03-09T14:39:46.219 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:39:46 vm07 systemd[1]: Stopped Ceph osd.2 for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:39:46.219 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:39:46 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:46.219 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:39:46 vm07 systemd[1]: Started Ceph osd.2 for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 
2026-03-09T14:39:46.219 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:39:46 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:46.219 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:39:46 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:46.219 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:39:46 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:39:47.003 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:39:46 vm11 bash[41290]: ts=2026-03-09T14:39:46.947Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:39:47.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:46 vm11 bash[43577]: cluster 2026-03-09T14:39:45.612967+0000 mon.a (mon.0) 251 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:39:47.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:46 vm11 bash[43577]: cluster 2026-03-09T14:39:45.612967+0000 mon.a (mon.0) 251 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:39:47.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:46 vm11 bash[43577]: cluster 2026-03-09T14:39:45.626919+0000 mon.a (mon.0) 252 : cluster [DBG] osdmap e98: 8 total, 7 up, 8 in 2026-03-09T14:39:47.003 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:46 vm11 bash[43577]: cluster 2026-03-09T14:39:45.626919+0000 mon.a (mon.0) 252 : cluster [DBG] osdmap e98: 8 total, 7 up, 8 in 2026-03-09T14:39:47.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:46 vm11 bash[43577]: audit 2026-03-09T14:39:46.558470+0000 mon.a (mon.0) 253 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:47.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:46 vm11 bash[43577]: audit 2026-03-09T14:39:46.558470+0000 mon.a (mon.0) 253 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:47.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:46 vm07 bash[55244]: cluster 2026-03-09T14:39:45.612967+0000 mon.a (mon.0) 251 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:39:47.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:46 vm07 bash[55244]: cluster 2026-03-09T14:39:45.612967+0000 mon.a (mon.0) 251 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:39:47.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:46 vm07 bash[55244]: cluster 2026-03-09T14:39:45.626919+0000 mon.a (mon.0) 252 : cluster [DBG] osdmap e98: 8 total, 7 up, 8 in 2026-03-09T14:39:47.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:46 vm07 bash[55244]: cluster 2026-03-09T14:39:45.626919+0000 mon.a (mon.0) 252 : cluster [DBG] osdmap e98: 8 total, 7 up, 8 in 2026-03-09T14:39:47.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:46 vm07 bash[55244]: audit 2026-03-09T14:39:46.558470+0000 mon.a (mon.0) 253 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:47.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:46 vm07 bash[55244]: audit 2026-03-09T14:39:46.558470+0000 mon.a (mon.0) 253 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:47.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:46 vm07 bash[56315]: cluster 2026-03-09T14:39:45.612967+0000 mon.a (mon.0) 251 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:39:47.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:46 vm07 bash[56315]: cluster 2026-03-09T14:39:45.612967+0000 mon.a (mon.0) 251 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:39:47.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:46 vm07 bash[56315]: cluster 2026-03-09T14:39:45.626919+0000 mon.a (mon.0) 252 : cluster [DBG] osdmap e98: 8 total, 7 up, 8 in 2026-03-09T14:39:47.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:46 vm07 bash[56315]: cluster 2026-03-09T14:39:45.626919+0000 mon.a (mon.0) 252 : cluster [DBG] osdmap e98: 8 total, 7 up, 8 in 2026-03-09T14:39:47.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:46 vm07 bash[56315]: audit 2026-03-09T14:39:46.558470+0000 mon.a (mon.0) 253 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:47.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:46 vm07 bash[56315]: audit 2026-03-09T14:39:46.558470+0000 mon.a (mon.0) 253 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:47.155 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:39:46 vm07 bash[60906]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T14:39:47.155 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:39:46 vm07 bash[60906]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T14:39:47.833 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:47 vm07 bash[55244]: cluster 2026-03-09T14:39:46.536224+0000 mgr.y (mgr.44103) 101 : cluster [DBG] pgmap v36: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 122 MiB used, 160 GiB / 160 GiB avail; 409 B/s rd, 0 op/s 2026-03-09T14:39:47.833 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:47 vm07 bash[55244]: cluster 2026-03-09T14:39:46.536224+0000 mgr.y (mgr.44103) 101 : cluster [DBG] pgmap v36: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 122 MiB used, 160 GiB / 160 GiB avail; 409 B/s rd, 0 op/s 2026-03-09T14:39:47.833 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:47 vm07 bash[55244]: audit 2026-03-09T14:39:46.710715+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:47.833 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:47 vm07 bash[55244]: audit 2026-03-09T14:39:46.710715+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:47.834 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:47 vm07 bash[55244]: cluster 2026-03-09T14:39:46.752464+0000 mon.a (mon.0) 255 : cluster [DBG] osdmap e99: 8 total, 7 up, 8 in 2026-03-09T14:39:47.834 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:47 vm07 bash[55244]: cluster 2026-03-09T14:39:46.752464+0000 mon.a (mon.0) 255 : cluster [DBG] osdmap e99: 8 total, 7 up, 8 in 2026-03-09T14:39:47.834 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:47 vm07 bash[55244]: audit 2026-03-09T14:39:47.070240+0000 mon.a (mon.0) 256 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:47.834 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:47 vm07 bash[55244]: audit 2026-03-09T14:39:47.070240+0000 mon.a (mon.0) 256 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:47.834 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:47 vm07 bash[55244]: audit 2026-03-09T14:39:47.077017+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:47.834 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:47 vm07 bash[55244]: audit 2026-03-09T14:39:47.077017+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:47.834 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:47 vm07 bash[55244]: audit 2026-03-09T14:39:47.580334+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:47.834 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:47 vm07 bash[55244]: audit 2026-03-09T14:39:47.580334+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:47.834 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:47 vm07 bash[56315]: cluster 2026-03-09T14:39:46.536224+0000 mgr.y (mgr.44103) 101 : cluster [DBG] pgmap v36: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 122 MiB used, 160 GiB / 160 GiB avail; 409 B/s rd, 0 op/s 2026-03-09T14:39:47.834 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:47 vm07 bash[56315]: cluster 2026-03-09T14:39:46.536224+0000 mgr.y (mgr.44103) 101 : cluster [DBG] pgmap v36: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 122 MiB used, 160 GiB / 160 GiB avail; 409 B/s rd, 0 op/s 2026-03-09T14:39:47.834 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:47 vm07 bash[56315]: audit 
2026-03-09T14:39:46.710715+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:47.834 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:47 vm07 bash[56315]: audit 2026-03-09T14:39:46.710715+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:47.834 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:47 vm07 bash[56315]: cluster 2026-03-09T14:39:46.752464+0000 mon.a (mon.0) 255 : cluster [DBG] osdmap e99: 8 total, 7 up, 8 in 2026-03-09T14:39:47.834 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:47 vm07 bash[56315]: cluster 2026-03-09T14:39:46.752464+0000 mon.a (mon.0) 255 : cluster [DBG] osdmap e99: 8 total, 7 up, 8 in 2026-03-09T14:39:47.834 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:47 vm07 bash[56315]: audit 2026-03-09T14:39:47.070240+0000 mon.a (mon.0) 256 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:47.834 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:47 vm07 bash[56315]: audit 2026-03-09T14:39:47.070240+0000 mon.a (mon.0) 256 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:47.834 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:47 vm07 bash[56315]: audit 2026-03-09T14:39:47.077017+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:47.834 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:47 vm07 bash[56315]: audit 2026-03-09T14:39:47.077017+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:47.834 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:47 vm07 bash[56315]: audit 2026-03-09T14:39:47.580334+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:47.834 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:47 vm07 bash[56315]: audit 2026-03-09T14:39:47.580334+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:48.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:47 vm11 bash[43577]: cluster 2026-03-09T14:39:46.536224+0000 mgr.y (mgr.44103) 101 : cluster [DBG] pgmap v36: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 122 MiB used, 160 GiB / 160 GiB avail; 409 B/s rd, 0 op/s 2026-03-09T14:39:48.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:47 vm11 bash[43577]: cluster 2026-03-09T14:39:46.536224+0000 mgr.y (mgr.44103) 101 : cluster [DBG] pgmap v36: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 122 MiB used, 160 GiB / 160 GiB avail; 409 B/s rd, 0 op/s 2026-03-09T14:39:48.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:47 vm11 bash[43577]: audit 2026-03-09T14:39:46.710715+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:48.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:47 vm11 bash[43577]: audit 2026-03-09T14:39:46.710715+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:48.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:47 vm11 bash[43577]: cluster 2026-03-09T14:39:46.752464+0000 mon.a (mon.0) 255 : cluster [DBG] osdmap e99: 8 total, 7 up, 8 in 2026-03-09T14:39:48.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:47 vm11 bash[43577]: cluster 2026-03-09T14:39:46.752464+0000 mon.a (mon.0) 255 : 
cluster [DBG] osdmap e99: 8 total, 7 up, 8 in 2026-03-09T14:39:48.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:47 vm11 bash[43577]: audit 2026-03-09T14:39:47.070240+0000 mon.a (mon.0) 256 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:48.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:47 vm11 bash[43577]: audit 2026-03-09T14:39:47.070240+0000 mon.a (mon.0) 256 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:48.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:47 vm11 bash[43577]: audit 2026-03-09T14:39:47.077017+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:48.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:47 vm11 bash[43577]: audit 2026-03-09T14:39:47.077017+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:48.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:47 vm11 bash[43577]: audit 2026-03-09T14:39:47.580334+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:48.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:47 vm11 bash[43577]: audit 2026-03-09T14:39:47.580334+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:48.155 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:39:47 vm07 bash[60906]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T14:39:48.155 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:39:47 vm07 bash[60906]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T14:39:48.155 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:39:47 vm07 bash[60906]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T14:39:48.155 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:39:47 vm07 bash[60906]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2 2026-03-09T14:39:48.155 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:39:47 vm07 bash[60906]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-56c8c7b8-78c1-4623-b1ba-4b5765cc5629/osd-block-6878f209-d828-467d-8a66-6cca096732a5 --path /var/lib/ceph/osd/ceph-2 --no-mon-config 2026-03-09T14:39:48.655 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:39:48 vm07 bash[60906]: Running command: /usr/bin/ln -snf /dev/ceph-56c8c7b8-78c1-4623-b1ba-4b5765cc5629/osd-block-6878f209-d828-467d-8a66-6cca096732a5 /var/lib/ceph/osd/ceph-2/block 2026-03-09T14:39:48.655 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:39:48 vm07 bash[60906]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block 2026-03-09T14:39:48.655 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:39:48 vm07 bash[60906]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2 2026-03-09T14:39:48.655 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:39:48 vm07 bash[60906]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2 2026-03-09T14:39:48.655 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:39:48 vm07 bash[60906]: --> ceph-volume lvm activate successful for osd ID: 2 2026-03-09T14:39:48.655 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:39:48 vm07 bash[61284]: debug 2026-03-09T14:39:48.361+0000 7ff0eb33a640 1 -- 192.168.123.107:0/3747176161 <== mon.0 v2:192.168.123.107:3300/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 194+0+0 (secure 0 0 0) 
0x556bc708f680 con 0x556bc629dc00 2026-03-09T14:39:49.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:48 vm11 bash[43577]: audit 2026-03-09T14:39:47.490447+0000 mgr.y (mgr.44103) 102 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:39:49.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:48 vm11 bash[43577]: audit 2026-03-09T14:39:47.490447+0000 mgr.y (mgr.44103) 102 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:39:49.034 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:48 vm07 bash[56315]: audit 2026-03-09T14:39:47.490447+0000 mgr.y (mgr.44103) 102 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:39:49.034 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:48 vm07 bash[56315]: audit 2026-03-09T14:39:47.490447+0000 mgr.y (mgr.44103) 102 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:39:49.034 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:48 vm07 bash[55244]: audit 2026-03-09T14:39:47.490447+0000 mgr.y (mgr.44103) 102 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:39:49.034 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:48 vm07 bash[55244]: audit 2026-03-09T14:39:47.490447+0000 mgr.y (mgr.44103) 102 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:39:49.405 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:39:49 vm07 bash[61284]: debug 2026-03-09T14:39:49.041+0000 7ff0edba4740 -1 Falling back to public interface 2026-03-09T14:39:50.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:49 vm11 bash[43577]: cluster 2026-03-09T14:39:48.536643+0000 mgr.y (mgr.44103) 103 : cluster [DBG] pgmap v38: 161 pgs: 33 active+undersized, 13 active+undersized+degraded, 115 active+clean; 457 KiB data, 123 MiB used, 160 GiB / 160 GiB avail; 55/627 objects degraded (8.772%) 2026-03-09T14:39:50.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:49 vm11 bash[43577]: cluster 2026-03-09T14:39:48.536643+0000 mgr.y (mgr.44103) 103 : cluster [DBG] pgmap v38: 161 pgs: 33 active+undersized, 13 active+undersized+degraded, 115 active+clean; 457 KiB data, 123 MiB used, 160 GiB / 160 GiB avail; 55/627 objects degraded (8.772%) 2026-03-09T14:39:50.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:49 vm11 bash[43577]: cluster 2026-03-09T14:39:48.721275+0000 mon.a (mon.0) 259 : cluster [WRN] Health check failed: Degraded data redundancy: 55/627 objects degraded (8.772%), 13 pgs degraded (PG_DEGRADED) 2026-03-09T14:39:50.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:49 vm11 bash[43577]: cluster 2026-03-09T14:39:48.721275+0000 mon.a (mon.0) 259 : cluster [WRN] Health check failed: Degraded data redundancy: 55/627 objects degraded (8.772%), 13 pgs degraded (PG_DEGRADED) 2026-03-09T14:39:50.005 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:49 vm07 bash[55244]: cluster 2026-03-09T14:39:48.536643+0000 mgr.y (mgr.44103) 103 : cluster [DBG] pgmap v38: 161 pgs: 33 active+undersized, 13 active+undersized+degraded, 115 active+clean; 457 KiB data, 123 MiB used, 160 GiB / 160 
GiB avail; 55/627 objects degraded (8.772%) 2026-03-09T14:39:50.006 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:49 vm07 bash[55244]: cluster 2026-03-09T14:39:48.536643+0000 mgr.y (mgr.44103) 103 : cluster [DBG] pgmap v38: 161 pgs: 33 active+undersized, 13 active+undersized+degraded, 115 active+clean; 457 KiB data, 123 MiB used, 160 GiB / 160 GiB avail; 55/627 objects degraded (8.772%) 2026-03-09T14:39:50.006 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:49 vm07 bash[55244]: cluster 2026-03-09T14:39:48.721275+0000 mon.a (mon.0) 259 : cluster [WRN] Health check failed: Degraded data redundancy: 55/627 objects degraded (8.772%), 13 pgs degraded (PG_DEGRADED) 2026-03-09T14:39:50.006 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:49 vm07 bash[55244]: cluster 2026-03-09T14:39:48.721275+0000 mon.a (mon.0) 259 : cluster [WRN] Health check failed: Degraded data redundancy: 55/627 objects degraded (8.772%), 13 pgs degraded (PG_DEGRADED) 2026-03-09T14:39:50.006 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:49 vm07 bash[56315]: cluster 2026-03-09T14:39:48.536643+0000 mgr.y (mgr.44103) 103 : cluster [DBG] pgmap v38: 161 pgs: 33 active+undersized, 13 active+undersized+degraded, 115 active+clean; 457 KiB data, 123 MiB used, 160 GiB / 160 GiB avail; 55/627 objects degraded (8.772%) 2026-03-09T14:39:50.006 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:49 vm07 bash[56315]: cluster 2026-03-09T14:39:48.536643+0000 mgr.y (mgr.44103) 103 : cluster [DBG] pgmap v38: 161 pgs: 33 active+undersized, 13 active+undersized+degraded, 115 active+clean; 457 KiB data, 123 MiB used, 160 GiB / 160 GiB avail; 55/627 objects degraded (8.772%) 2026-03-09T14:39:50.006 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:49 vm07 bash[56315]: cluster 2026-03-09T14:39:48.721275+0000 mon.a (mon.0) 259 : cluster [WRN] Health check failed: Degraded data redundancy: 55/627 objects degraded (8.772%), 13 pgs degraded (PG_DEGRADED) 2026-03-09T14:39:50.006 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:49 vm07 bash[56315]: cluster 2026-03-09T14:39:48.721275+0000 mon.a (mon.0) 259 : cluster [WRN] Health check failed: Degraded data redundancy: 55/627 objects degraded (8.772%), 13 pgs degraded (PG_DEGRADED) 2026-03-09T14:39:50.405 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:39:50 vm07 bash[61284]: debug 2026-03-09T14:39:50.013+0000 7ff0edba4740 -1 osd.2 0 read_superblock omap replica is missing. 
2026-03-09T14:39:50.405 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:39:50 vm07 bash[61284]: debug 2026-03-09T14:39:50.021+0000 7ff0edba4740 -1 osd.2 97 log_to_monitors true 2026-03-09T14:39:51.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:50 vm11 bash[43577]: audit 2026-03-09T14:39:50.027939+0000 mon.c (mon.1) 11 : audit [INF] from='osd.2 [v2:192.168.123.107:6818/2453058038,v1:192.168.123.107:6819/2453058038]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T14:39:51.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:50 vm11 bash[43577]: audit 2026-03-09T14:39:50.027939+0000 mon.c (mon.1) 11 : audit [INF] from='osd.2 [v2:192.168.123.107:6818/2453058038,v1:192.168.123.107:6819/2453058038]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T14:39:51.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:50 vm11 bash[43577]: audit 2026-03-09T14:39:50.028241+0000 mon.a (mon.0) 260 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T14:39:51.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:50 vm11 bash[43577]: audit 2026-03-09T14:39:50.028241+0000 mon.a (mon.0) 260 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T14:39:51.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:50 vm07 bash[56315]: audit 2026-03-09T14:39:50.027939+0000 mon.c (mon.1) 11 : audit [INF] from='osd.2 [v2:192.168.123.107:6818/2453058038,v1:192.168.123.107:6819/2453058038]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T14:39:51.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:50 vm07 bash[56315]: audit 2026-03-09T14:39:50.027939+0000 mon.c (mon.1) 11 : audit [INF] from='osd.2 [v2:192.168.123.107:6818/2453058038,v1:192.168.123.107:6819/2453058038]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T14:39:51.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:50 vm07 bash[56315]: audit 2026-03-09T14:39:50.028241+0000 mon.a (mon.0) 260 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T14:39:51.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:50 vm07 bash[56315]: audit 2026-03-09T14:39:50.028241+0000 mon.a (mon.0) 260 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T14:39:51.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:50 vm07 bash[55244]: audit 2026-03-09T14:39:50.027939+0000 mon.c (mon.1) 11 : audit [INF] from='osd.2 [v2:192.168.123.107:6818/2453058038,v1:192.168.123.107:6819/2453058038]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T14:39:51.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:50 vm07 bash[55244]: audit 2026-03-09T14:39:50.027939+0000 mon.c (mon.1) 11 : audit [INF] from='osd.2 [v2:192.168.123.107:6818/2453058038,v1:192.168.123.107:6819/2453058038]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T14:39:51.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:50 vm07 bash[55244]: audit 
2026-03-09T14:39:50.028241+0000 mon.a (mon.0) 260 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T14:39:51.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:50 vm07 bash[55244]: audit 2026-03-09T14:39:50.028241+0000 mon.a (mon.0) 260 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T14:39:51.905 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:39:51 vm07 bash[61284]: debug 2026-03-09T14:39:51.629+0000 7ff0e514e640 -1 osd.2 97 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T14:39:51.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:51 vm07 bash[55244]: cluster 2026-03-09T14:39:50.536971+0000 mgr.y (mgr.44103) 104 : cluster [DBG] pgmap v39: 161 pgs: 33 active+undersized, 13 active+undersized+degraded, 115 active+clean; 457 KiB data, 123 MiB used, 160 GiB / 160 GiB avail; 55/627 objects degraded (8.772%) 2026-03-09T14:39:51.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:51 vm07 bash[55244]: cluster 2026-03-09T14:39:50.536971+0000 mgr.y (mgr.44103) 104 : cluster [DBG] pgmap v39: 161 pgs: 33 active+undersized, 13 active+undersized+degraded, 115 active+clean; 457 KiB data, 123 MiB used, 160 GiB / 160 GiB avail; 55/627 objects degraded (8.772%) 2026-03-09T14:39:51.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:51 vm07 bash[55244]: audit 2026-03-09T14:39:50.742870+0000 mon.a (mon.0) 261 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T14:39:51.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:51 vm07 bash[55244]: audit 2026-03-09T14:39:50.742870+0000 mon.a (mon.0) 261 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T14:39:51.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:51 vm07 bash[55244]: audit 2026-03-09T14:39:50.748927+0000 mon.c (mon.1) 12 : audit [INF] from='osd.2 [v2:192.168.123.107:6818/2453058038,v1:192.168.123.107:6819/2453058038]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:39:51.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:51 vm07 bash[55244]: audit 2026-03-09T14:39:50.748927+0000 mon.c (mon.1) 12 : audit [INF] from='osd.2 [v2:192.168.123.107:6818/2453058038,v1:192.168.123.107:6819/2453058038]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:39:51.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:51 vm07 bash[55244]: cluster 2026-03-09T14:39:50.749280+0000 mon.a (mon.0) 262 : cluster [DBG] osdmap e100: 8 total, 7 up, 8 in 2026-03-09T14:39:51.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:51 vm07 bash[55244]: cluster 2026-03-09T14:39:50.749280+0000 mon.a (mon.0) 262 : cluster [DBG] osdmap e100: 8 total, 7 up, 8 in 2026-03-09T14:39:51.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:51 vm07 bash[55244]: audit 2026-03-09T14:39:50.749987+0000 mon.a (mon.0) 263 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:39:51.906 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 
09 14:39:51 vm07 bash[55244]: audit 2026-03-09T14:39:50.749987+0000 mon.a (mon.0) 263 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:39:51.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:51 vm07 bash[56315]: cluster 2026-03-09T14:39:50.536971+0000 mgr.y (mgr.44103) 104 : cluster [DBG] pgmap v39: 161 pgs: 33 active+undersized, 13 active+undersized+degraded, 115 active+clean; 457 KiB data, 123 MiB used, 160 GiB / 160 GiB avail; 55/627 objects degraded (8.772%) 2026-03-09T14:39:51.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:51 vm07 bash[56315]: cluster 2026-03-09T14:39:50.536971+0000 mgr.y (mgr.44103) 104 : cluster [DBG] pgmap v39: 161 pgs: 33 active+undersized, 13 active+undersized+degraded, 115 active+clean; 457 KiB data, 123 MiB used, 160 GiB / 160 GiB avail; 55/627 objects degraded (8.772%) 2026-03-09T14:39:51.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:51 vm07 bash[56315]: audit 2026-03-09T14:39:50.742870+0000 mon.a (mon.0) 261 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T14:39:51.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:51 vm07 bash[56315]: audit 2026-03-09T14:39:50.742870+0000 mon.a (mon.0) 261 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T14:39:51.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:51 vm07 bash[56315]: audit 2026-03-09T14:39:50.748927+0000 mon.c (mon.1) 12 : audit [INF] from='osd.2 [v2:192.168.123.107:6818/2453058038,v1:192.168.123.107:6819/2453058038]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:39:51.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:51 vm07 bash[56315]: audit 2026-03-09T14:39:50.748927+0000 mon.c (mon.1) 12 : audit [INF] from='osd.2 [v2:192.168.123.107:6818/2453058038,v1:192.168.123.107:6819/2453058038]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:39:51.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:51 vm07 bash[56315]: cluster 2026-03-09T14:39:50.749280+0000 mon.a (mon.0) 262 : cluster [DBG] osdmap e100: 8 total, 7 up, 8 in 2026-03-09T14:39:51.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:51 vm07 bash[56315]: cluster 2026-03-09T14:39:50.749280+0000 mon.a (mon.0) 262 : cluster [DBG] osdmap e100: 8 total, 7 up, 8 in 2026-03-09T14:39:51.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:51 vm07 bash[56315]: audit 2026-03-09T14:39:50.749987+0000 mon.a (mon.0) 263 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:39:51.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:51 vm07 bash[56315]: audit 2026-03-09T14:39:50.749987+0000 mon.a (mon.0) 263 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:39:52.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:51 vm11 bash[43577]: cluster 2026-03-09T14:39:50.536971+0000 mgr.y (mgr.44103) 104 : cluster [DBG] pgmap v39: 161 pgs: 33 active+undersized, 13 
active+undersized+degraded, 115 active+clean; 457 KiB data, 123 MiB used, 160 GiB / 160 GiB avail; 55/627 objects degraded (8.772%) 2026-03-09T14:39:52.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:51 vm11 bash[43577]: cluster 2026-03-09T14:39:50.536971+0000 mgr.y (mgr.44103) 104 : cluster [DBG] pgmap v39: 161 pgs: 33 active+undersized, 13 active+undersized+degraded, 115 active+clean; 457 KiB data, 123 MiB used, 160 GiB / 160 GiB avail; 55/627 objects degraded (8.772%) 2026-03-09T14:39:52.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:51 vm11 bash[43577]: audit 2026-03-09T14:39:50.742870+0000 mon.a (mon.0) 261 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T14:39:52.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:51 vm11 bash[43577]: audit 2026-03-09T14:39:50.742870+0000 mon.a (mon.0) 261 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T14:39:52.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:51 vm11 bash[43577]: audit 2026-03-09T14:39:50.748927+0000 mon.c (mon.1) 12 : audit [INF] from='osd.2 [v2:192.168.123.107:6818/2453058038,v1:192.168.123.107:6819/2453058038]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:39:52.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:51 vm11 bash[43577]: audit 2026-03-09T14:39:50.748927+0000 mon.c (mon.1) 12 : audit [INF] from='osd.2 [v2:192.168.123.107:6818/2453058038,v1:192.168.123.107:6819/2453058038]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:39:52.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:51 vm11 bash[43577]: cluster 2026-03-09T14:39:50.749280+0000 mon.a (mon.0) 262 : cluster [DBG] osdmap e100: 8 total, 7 up, 8 in 2026-03-09T14:39:52.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:51 vm11 bash[43577]: cluster 2026-03-09T14:39:50.749280+0000 mon.a (mon.0) 262 : cluster [DBG] osdmap e100: 8 total, 7 up, 8 in 2026-03-09T14:39:52.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:51 vm11 bash[43577]: audit 2026-03-09T14:39:50.749987+0000 mon.a (mon.0) 263 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:39:52.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:51 vm11 bash[43577]: audit 2026-03-09T14:39:50.749987+0000 mon.a (mon.0) 263 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:39:53.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:52 vm07 bash[55244]: cluster 2026-03-09T14:39:51.743025+0000 mon.a (mon.0) 264 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:39:53.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:52 vm07 bash[55244]: cluster 2026-03-09T14:39:51.743025+0000 mon.a (mon.0) 264 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:39:53.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:52 vm07 bash[55244]: cluster 2026-03-09T14:39:51.748319+0000 mon.a (mon.0) 265 : cluster [INF] osd.2 
[v2:192.168.123.107:6818/2453058038,v1:192.168.123.107:6819/2453058038] boot 2026-03-09T14:39:53.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:52 vm07 bash[55244]: cluster 2026-03-09T14:39:51.748319+0000 mon.a (mon.0) 265 : cluster [INF] osd.2 [v2:192.168.123.107:6818/2453058038,v1:192.168.123.107:6819/2453058038] boot 2026-03-09T14:39:53.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:52 vm07 bash[55244]: cluster 2026-03-09T14:39:51.748428+0000 mon.a (mon.0) 266 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in 2026-03-09T14:39:53.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:52 vm07 bash[55244]: cluster 2026-03-09T14:39:51.748428+0000 mon.a (mon.0) 266 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in 2026-03-09T14:39:53.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:52 vm07 bash[55244]: audit 2026-03-09T14:39:51.749280+0000 mon.a (mon.0) 267 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:39:53.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:52 vm07 bash[55244]: audit 2026-03-09T14:39:51.749280+0000 mon.a (mon.0) 267 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:39:53.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:52 vm07 bash[55244]: audit 2026-03-09T14:39:52.575612+0000 mon.a (mon.0) 268 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:39:53.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:52 vm07 bash[55244]: audit 2026-03-09T14:39:52.575612+0000 mon.a (mon.0) 268 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:39:53.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:52 vm07 bash[56315]: cluster 2026-03-09T14:39:51.743025+0000 mon.a (mon.0) 264 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:39:53.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:52 vm07 bash[56315]: cluster 2026-03-09T14:39:51.743025+0000 mon.a (mon.0) 264 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:39:53.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:52 vm07 bash[56315]: cluster 2026-03-09T14:39:51.748319+0000 mon.a (mon.0) 265 : cluster [INF] osd.2 [v2:192.168.123.107:6818/2453058038,v1:192.168.123.107:6819/2453058038] boot 2026-03-09T14:39:53.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:52 vm07 bash[56315]: cluster 2026-03-09T14:39:51.748319+0000 mon.a (mon.0) 265 : cluster [INF] osd.2 [v2:192.168.123.107:6818/2453058038,v1:192.168.123.107:6819/2453058038] boot 2026-03-09T14:39:53.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:52 vm07 bash[56315]: cluster 2026-03-09T14:39:51.748428+0000 mon.a (mon.0) 266 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in 2026-03-09T14:39:53.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:52 vm07 bash[56315]: cluster 2026-03-09T14:39:51.748428+0000 mon.a (mon.0) 266 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in 2026-03-09T14:39:53.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:52 vm07 bash[56315]: audit 2026-03-09T14:39:51.749280+0000 mon.a (mon.0) 267 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:39:53.156 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:52 vm07 bash[56315]: audit 2026-03-09T14:39:51.749280+0000 mon.a (mon.0) 267 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:39:53.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:52 vm07 bash[56315]: audit 2026-03-09T14:39:52.575612+0000 mon.a (mon.0) 268 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:39:53.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:52 vm07 bash[56315]: audit 2026-03-09T14:39:52.575612+0000 mon.a (mon.0) 268 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:39:53.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:52 vm11 bash[43577]: cluster 2026-03-09T14:39:51.743025+0000 mon.a (mon.0) 264 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:39:53.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:52 vm11 bash[43577]: cluster 2026-03-09T14:39:51.743025+0000 mon.a (mon.0) 264 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:39:53.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:52 vm11 bash[43577]: cluster 2026-03-09T14:39:51.748319+0000 mon.a (mon.0) 265 : cluster [INF] osd.2 [v2:192.168.123.107:6818/2453058038,v1:192.168.123.107:6819/2453058038] boot 2026-03-09T14:39:53.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:52 vm11 bash[43577]: cluster 2026-03-09T14:39:51.748319+0000 mon.a (mon.0) 265 : cluster [INF] osd.2 [v2:192.168.123.107:6818/2453058038,v1:192.168.123.107:6819/2453058038] boot 2026-03-09T14:39:53.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:52 vm11 bash[43577]: cluster 2026-03-09T14:39:51.748428+0000 mon.a (mon.0) 266 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in 2026-03-09T14:39:53.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:52 vm11 bash[43577]: cluster 2026-03-09T14:39:51.748428+0000 mon.a (mon.0) 266 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in 2026-03-09T14:39:53.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:52 vm11 bash[43577]: audit 2026-03-09T14:39:51.749280+0000 mon.a (mon.0) 267 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:39:53.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:52 vm11 bash[43577]: audit 2026-03-09T14:39:51.749280+0000 mon.a (mon.0) 267 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:39:53.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:52 vm11 bash[43577]: audit 2026-03-09T14:39:52.575612+0000 mon.a (mon.0) 268 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:39:53.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:52 vm11 bash[43577]: audit 2026-03-09T14:39:52.575612+0000 mon.a (mon.0) 268 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:39:53.760 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:39:53 vm07 bash[52213]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:39:53] "GET /metrics HTTP/1.1" 200 37760 "" "Prometheus/2.51.0" 
2026-03-09T14:39:54.143 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:53 vm11 bash[43577]: cluster 2026-03-09T14:39:51.621052+0000 osd.2 (osd.2) 1 : cluster [WRN] OSD bench result of 29124.626724 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. 2026-03-09T14:39:54.143 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:53 vm11 bash[43577]: cluster 2026-03-09T14:39:51.621052+0000 osd.2 (osd.2) 1 : cluster [WRN] OSD bench result of 29124.626724 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. 2026-03-09T14:39:54.143 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:53 vm11 bash[43577]: cluster 2026-03-09T14:39:52.537259+0000 mgr.y (mgr.44103) 105 : cluster [DBG] pgmap v42: 161 pgs: 33 active+undersized, 13 active+undersized+degraded, 115 active+clean; 457 KiB data, 142 MiB used, 160 GiB / 160 GiB avail; 55/627 objects degraded (8.772%) 2026-03-09T14:39:54.143 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:53 vm11 bash[43577]: cluster 2026-03-09T14:39:52.537259+0000 mgr.y (mgr.44103) 105 : cluster [DBG] pgmap v42: 161 pgs: 33 active+undersized, 13 active+undersized+degraded, 115 active+clean; 457 KiB data, 142 MiB used, 160 GiB / 160 GiB avail; 55/627 objects degraded (8.772%) 2026-03-09T14:39:54.143 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:53 vm11 bash[43577]: cluster 2026-03-09T14:39:52.774907+0000 mon.a (mon.0) 269 : cluster [DBG] osdmap e102: 8 total, 8 up, 8 in 2026-03-09T14:39:54.143 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:53 vm11 bash[43577]: cluster 2026-03-09T14:39:52.774907+0000 mon.a (mon.0) 269 : cluster [DBG] osdmap e102: 8 total, 8 up, 8 in 2026-03-09T14:39:54.143 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:53 vm11 bash[43577]: audit 2026-03-09T14:39:53.008297+0000 mon.a (mon.0) 270 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:54.143 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:53 vm11 bash[43577]: audit 2026-03-09T14:39:53.008297+0000 mon.a (mon.0) 270 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:54.143 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:53 vm11 bash[43577]: audit 2026-03-09T14:39:53.015454+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:54.143 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:53 vm11 bash[43577]: audit 2026-03-09T14:39:53.015454+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:54.143 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:53 vm11 bash[43577]: audit 2026-03-09T14:39:53.647982+0000 mon.a (mon.0) 272 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:54.143 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:53 vm11 bash[43577]: audit 2026-03-09T14:39:53.647982+0000 mon.a (mon.0) 272 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:54.143 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:53 
vm11 bash[43577]: audit 2026-03-09T14:39:53.656284+0000 mon.a (mon.0) 273 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:54.143 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:53 vm11 bash[43577]: audit 2026-03-09T14:39:53.656284+0000 mon.a (mon.0) 273 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:54.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:53 vm07 bash[55244]: cluster 2026-03-09T14:39:51.621052+0000 osd.2 (osd.2) 1 : cluster [WRN] OSD bench result of 29124.626724 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. 2026-03-09T14:39:54.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:53 vm07 bash[55244]: cluster 2026-03-09T14:39:51.621052+0000 osd.2 (osd.2) 1 : cluster [WRN] OSD bench result of 29124.626724 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. 2026-03-09T14:39:54.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:53 vm07 bash[55244]: cluster 2026-03-09T14:39:52.537259+0000 mgr.y (mgr.44103) 105 : cluster [DBG] pgmap v42: 161 pgs: 33 active+undersized, 13 active+undersized+degraded, 115 active+clean; 457 KiB data, 142 MiB used, 160 GiB / 160 GiB avail; 55/627 objects degraded (8.772%) 2026-03-09T14:39:54.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:53 vm07 bash[55244]: cluster 2026-03-09T14:39:52.537259+0000 mgr.y (mgr.44103) 105 : cluster [DBG] pgmap v42: 161 pgs: 33 active+undersized, 13 active+undersized+degraded, 115 active+clean; 457 KiB data, 142 MiB used, 160 GiB / 160 GiB avail; 55/627 objects degraded (8.772%) 2026-03-09T14:39:54.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:53 vm07 bash[55244]: cluster 2026-03-09T14:39:52.774907+0000 mon.a (mon.0) 269 : cluster [DBG] osdmap e102: 8 total, 8 up, 8 in 2026-03-09T14:39:54.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:53 vm07 bash[55244]: cluster 2026-03-09T14:39:52.774907+0000 mon.a (mon.0) 269 : cluster [DBG] osdmap e102: 8 total, 8 up, 8 in 2026-03-09T14:39:54.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:53 vm07 bash[55244]: audit 2026-03-09T14:39:53.008297+0000 mon.a (mon.0) 270 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:54.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:53 vm07 bash[55244]: audit 2026-03-09T14:39:53.008297+0000 mon.a (mon.0) 270 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:54.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:53 vm07 bash[55244]: audit 2026-03-09T14:39:53.015454+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:54.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:53 vm07 bash[55244]: audit 2026-03-09T14:39:53.015454+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:54.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:53 vm07 bash[55244]: audit 2026-03-09T14:39:53.647982+0000 mon.a (mon.0) 272 : audit 
[INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:54.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:53 vm07 bash[55244]: audit 2026-03-09T14:39:53.647982+0000 mon.a (mon.0) 272 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:54.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:53 vm07 bash[55244]: audit 2026-03-09T14:39:53.656284+0000 mon.a (mon.0) 273 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:54.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:53 vm07 bash[55244]: audit 2026-03-09T14:39:53.656284+0000 mon.a (mon.0) 273 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:54.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:53 vm07 bash[56315]: cluster 2026-03-09T14:39:51.621052+0000 osd.2 (osd.2) 1 : cluster [WRN] OSD bench result of 29124.626724 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. 2026-03-09T14:39:54.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:53 vm07 bash[56315]: cluster 2026-03-09T14:39:51.621052+0000 osd.2 (osd.2) 1 : cluster [WRN] OSD bench result of 29124.626724 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. 2026-03-09T14:39:54.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:53 vm07 bash[56315]: cluster 2026-03-09T14:39:52.537259+0000 mgr.y (mgr.44103) 105 : cluster [DBG] pgmap v42: 161 pgs: 33 active+undersized, 13 active+undersized+degraded, 115 active+clean; 457 KiB data, 142 MiB used, 160 GiB / 160 GiB avail; 55/627 objects degraded (8.772%) 2026-03-09T14:39:54.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:53 vm07 bash[56315]: cluster 2026-03-09T14:39:52.537259+0000 mgr.y (mgr.44103) 105 : cluster [DBG] pgmap v42: 161 pgs: 33 active+undersized, 13 active+undersized+degraded, 115 active+clean; 457 KiB data, 142 MiB used, 160 GiB / 160 GiB avail; 55/627 objects degraded (8.772%) 2026-03-09T14:39:54.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:53 vm07 bash[56315]: cluster 2026-03-09T14:39:52.774907+0000 mon.a (mon.0) 269 : cluster [DBG] osdmap e102: 8 total, 8 up, 8 in 2026-03-09T14:39:54.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:53 vm07 bash[56315]: cluster 2026-03-09T14:39:52.774907+0000 mon.a (mon.0) 269 : cluster [DBG] osdmap e102: 8 total, 8 up, 8 in 2026-03-09T14:39:54.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:53 vm07 bash[56315]: audit 2026-03-09T14:39:53.008297+0000 mon.a (mon.0) 270 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:54.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:53 vm07 bash[56315]: audit 2026-03-09T14:39:53.008297+0000 mon.a (mon.0) 270 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:54.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:53 vm07 bash[56315]: audit 2026-03-09T14:39:53.015454+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 
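The OSD bench warnings above recommend measuring the drive's real IOPS with an external benchmark (e.g. fio) and then overriding osd_mclock_max_capacity_iops_[hdd|ssd] for the affected OSD. A minimal sketch of that procedure, assuming a placeholder device path and a placeholder measured value (this is not something the job itself runs):

# Sketch only, not executed in this job. /dev/sdX stands in for the OSD's
# backing device; a raw randwrite fio run is destructive, so use a spare
# disk or a pre-deployment host.
fio --name=osd-iops --filename=/dev/sdX --direct=1 --ioengine=libaio \
    --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based
# Pin the measured capacity for the affected OSD; the option name is taken
# from the warning above, and 450 is a placeholder for the fio result.
ceph config set osd.2 osd_mclock_max_capacity_iops_hdd 450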
2026-03-09T14:39:54.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:53 vm07 bash[56315]: audit 2026-03-09T14:39:53.015454+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:54.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:53 vm07 bash[56315]: audit 2026-03-09T14:39:53.647982+0000 mon.a (mon.0) 272 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:54.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:53 vm07 bash[56315]: audit 2026-03-09T14:39:53.647982+0000 mon.a (mon.0) 272 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:54.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:53 vm07 bash[56315]: audit 2026-03-09T14:39:53.656284+0000 mon.a (mon.0) 273 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:54.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:53 vm07 bash[56315]: audit 2026-03-09T14:39:53.656284+0000 mon.a (mon.0) 273 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:39:54.503 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:39:54 vm11 bash[41290]: ts=2026-03-09T14:39:54.146Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"192.168.123.111:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:39:55.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:54 vm07 bash[55244]: cluster 2026-03-09T14:39:54.766822+0000 mon.a (mon.0) 274 : cluster [WRN] Health check update: Degraded data redundancy: 2/627 objects degraded (0.319%), 2 pgs degraded (PG_DEGRADED) 2026-03-09T14:39:55.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:54 vm07 bash[55244]: cluster 2026-03-09T14:39:54.766822+0000 mon.a (mon.0) 274 : cluster [WRN] Health check update: Degraded data redundancy: 2/627 objects degraded (0.319%), 2 pgs degraded (PG_DEGRADED) 2026-03-09T14:39:55.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:54 vm07 bash[56315]: cluster 2026-03-09T14:39:54.766822+0000 mon.a (mon.0) 274 : cluster [WRN] Health check update: Degraded data redundancy: 2/627 objects degraded (0.319%), 2 pgs degraded (PG_DEGRADED) 2026-03-09T14:39:55.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:54 vm07 bash[56315]: cluster 2026-03-09T14:39:54.766822+0000 mon.a (mon.0) 274 : cluster [WRN] Health check update: Degraded data redundancy: 2/627 objects degraded (0.319%), 2 pgs degraded (PG_DEGRADED) 2026-03-09T14:39:55.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:54 vm11 bash[43577]: cluster 2026-03-09T14:39:54.766822+0000 mon.a (mon.0) 274 : cluster [WRN] Health check update: Degraded data redundancy: 2/627 objects degraded (0.319%), 2 pgs degraded (PG_DEGRADED) 2026-03-09T14:39:55.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:54 vm11 bash[43577]: cluster 2026-03-09T14:39:54.766822+0000 mon.a (mon.0) 274 : cluster [WRN] Health check update: Degraded data redundancy: 2/627 objects degraded (0.319%), 2 pgs degraded (PG_DEGRADED) 2026-03-09T14:39:56.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:55 vm07 bash[55244]: cluster 2026-03-09T14:39:54.537777+0000 mgr.y (mgr.44103) 106 : cluster [DBG] pgmap v44: 161 pgs: 11 active+undersized, 2 active+undersized+degraded, 148 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 2/627 objects degraded (0.319%) 2026-03-09T14:39:56.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:55 vm07 bash[55244]: cluster 2026-03-09T14:39:54.537777+0000 mgr.y (mgr.44103) 106 : cluster [DBG] pgmap v44: 161 pgs: 11 active+undersized, 2 active+undersized+degraded, 148 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 2/627 objects degraded 
(0.319%) 2026-03-09T14:39:56.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:55 vm07 bash[56315]: cluster 2026-03-09T14:39:54.537777+0000 mgr.y (mgr.44103) 106 : cluster [DBG] pgmap v44: 161 pgs: 11 active+undersized, 2 active+undersized+degraded, 148 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 2/627 objects degraded (0.319%) 2026-03-09T14:39:56.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:55 vm07 bash[56315]: cluster 2026-03-09T14:39:54.537777+0000 mgr.y (mgr.44103) 106 : cluster [DBG] pgmap v44: 161 pgs: 11 active+undersized, 2 active+undersized+degraded, 148 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 2/627 objects degraded (0.319%) 2026-03-09T14:39:56.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:55 vm11 bash[43577]: cluster 2026-03-09T14:39:54.537777+0000 mgr.y (mgr.44103) 106 : cluster [DBG] pgmap v44: 161 pgs: 11 active+undersized, 2 active+undersized+degraded, 148 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 2/627 objects degraded (0.319%) 2026-03-09T14:39:56.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:55 vm11 bash[43577]: cluster 2026-03-09T14:39:54.537777+0000 mgr.y (mgr.44103) 106 : cluster [DBG] pgmap v44: 161 pgs: 11 active+undersized, 2 active+undersized+degraded, 148 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 2/627 objects degraded (0.319%) 2026-03-09T14:39:57.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:56 vm07 bash[55244]: cluster 2026-03-09T14:39:56.822162+0000 mon.a (mon.0) 275 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 2/627 objects degraded (0.319%), 2 pgs degraded) 2026-03-09T14:39:57.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:56 vm07 bash[55244]: cluster 2026-03-09T14:39:56.822162+0000 mon.a (mon.0) 275 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 2/627 objects degraded (0.319%), 2 pgs degraded) 2026-03-09T14:39:57.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:56 vm07 bash[55244]: cluster 2026-03-09T14:39:56.822187+0000 mon.a (mon.0) 276 : cluster [INF] Cluster is now healthy 2026-03-09T14:39:57.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:56 vm07 bash[55244]: cluster 2026-03-09T14:39:56.822187+0000 mon.a (mon.0) 276 : cluster [INF] Cluster is now healthy 2026-03-09T14:39:57.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:56 vm07 bash[56315]: cluster 2026-03-09T14:39:56.822162+0000 mon.a (mon.0) 275 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 2/627 objects degraded (0.319%), 2 pgs degraded) 2026-03-09T14:39:57.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:56 vm07 bash[56315]: cluster 2026-03-09T14:39:56.822162+0000 mon.a (mon.0) 275 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 2/627 objects degraded (0.319%), 2 pgs degraded) 2026-03-09T14:39:57.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:56 vm07 bash[56315]: cluster 2026-03-09T14:39:56.822187+0000 mon.a (mon.0) 276 : cluster [INF] Cluster is now healthy 2026-03-09T14:39:57.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:56 vm07 bash[56315]: cluster 2026-03-09T14:39:56.822187+0000 mon.a (mon.0) 276 : cluster [INF] Cluster is now healthy 2026-03-09T14:39:57.209 INFO:teuthology.orchestra.run.vm07.stdout:true 2026-03-09T14:39:57.253 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 
09 14:39:56 vm11 bash[41290]: ts=2026-03-09T14:39:56.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:39:57.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:56 vm11 bash[43577]: cluster 2026-03-09T14:39:56.822162+0000 mon.a (mon.0) 275 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 2/627 objects degraded (0.319%), 2 pgs degraded) 2026-03-09T14:39:57.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:56 vm11 bash[43577]: cluster 2026-03-09T14:39:56.822162+0000 mon.a (mon.0) 275 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 2/627 objects degraded (0.319%), 2 pgs degraded) 2026-03-09T14:39:57.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:56 vm11 bash[43577]: cluster 2026-03-09T14:39:56.822187+0000 mon.a (mon.0) 276 : cluster [INF] Cluster is now healthy 2026-03-09T14:39:57.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:56 vm11 bash[43577]: cluster 2026-03-09T14:39:56.822187+0000 mon.a (mon.0) 276 : cluster [INF] Cluster is now healthy 2026-03-09T14:39:57.605 INFO:teuthology.orchestra.run.vm07.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T14:39:57.605 INFO:teuthology.orchestra.run.vm07.stdout:alertmanager.a vm07 *:9093,9094 running (2m) 4s ago 7m 14.3M - 0.25.0 c8568f914cd2 7b5214f8e385 2026-03-09T14:39:57.605 INFO:teuthology.orchestra.run.vm07.stdout:grafana.a vm11 *:3000 running (2m) 44s ago 7m 37.3M - dad864ee21e9 614f6a00be7a 2026-03-09T14:39:57.605 INFO:teuthology.orchestra.run.vm07.stdout:iscsi.foo.vm07.ohlmos vm07 running (100s) 4s ago 6m 42.8M - 3.5 e1d6a67b021e e3b30dab288c 2026-03-09T14:39:57.605 INFO:teuthology.orchestra.run.vm07.stdout:mgr.x vm11 *:8443,9283,8765 running (98s) 44s ago 9m 464M - 19.2.3-678-ge911bdeb 654f31e6858e d35dddd392d1 2026-03-09T14:39:57.605 INFO:teuthology.orchestra.run.vm07.stdout:mgr.y vm07 *:8443,9283,8765 running (2m) 4s ago 10m 524M - 19.2.3-678-ge911bdeb 654f31e6858e bdbac6dff330 2026-03-09T14:39:57.605 INFO:teuthology.orchestra.run.vm07.stdout:mon.a vm07 running (69s) 4s ago 10m 41.8M 2048M 19.2.3-678-ge911bdeb 654f31e6858e bcdaa5dfc948 2026-03-09T14:39:57.605 
INFO:teuthology.orchestra.run.vm07.stdout:mon.b vm11 running (50s) 44s ago 10m 19.1M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1caba9bf8a13 2026-03-09T14:39:57.605 INFO:teuthology.orchestra.run.vm07.stdout:mon.c vm07 running (83s) 4s ago 10m 40.7M 2048M 19.2.3-678-ge911bdeb 654f31e6858e ff7dfe3a6c7c 2026-03-09T14:39:57.605 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.a vm07 *:9100 running (2m) 4s ago 7m 7220k - 1.7.0 72c9c2088986 16d64a9c3aa7 2026-03-09T14:39:57.605 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.b vm11 *:9100 running (2m) 44s ago 7m 7231k - 1.7.0 72c9c2088986 8e368c535897 2026-03-09T14:39:57.605 INFO:teuthology.orchestra.run.vm07.stdout:osd.0 vm07 running (9m) 4s ago 9m 52.0M 4096M 17.2.0 e1d6a67b021e 7a4a11fbf70d 2026-03-09T14:39:57.605 INFO:teuthology.orchestra.run.vm07.stdout:osd.1 vm07 running (9m) 4s ago 9m 54.0M 4096M 17.2.0 e1d6a67b021e 15e2e23b506b 2026-03-09T14:39:57.605 INFO:teuthology.orchestra.run.vm07.stdout:osd.2 vm07 running (9s) 4s ago 9m 12.3M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 7d943c2f091c 2026-03-09T14:39:57.605 INFO:teuthology.orchestra.run.vm07.stdout:osd.3 vm07 running (26s) 4s ago 8m 45.3M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 7c234b83449a 2026-03-09T14:39:57.605 INFO:teuthology.orchestra.run.vm07.stdout:osd.4 vm11 running (8m) 44s ago 8m 51.4M 4096M 17.2.0 e1d6a67b021e 172516d931e5 2026-03-09T14:39:57.605 INFO:teuthology.orchestra.run.vm07.stdout:osd.5 vm11 running (8m) 44s ago 8m 49.0M 4096M 17.2.0 e1d6a67b021e d7defb26b5d1 2026-03-09T14:39:57.605 INFO:teuthology.orchestra.run.vm07.stdout:osd.6 vm11 running (8m) 44s ago 8m 49.2M 4096M 17.2.0 e1d6a67b021e 52e28e90b585 2026-03-09T14:39:57.605 INFO:teuthology.orchestra.run.vm07.stdout:osd.7 vm11 running (7m) 44s ago 7m 49.3M 4096M 17.2.0 e1d6a67b021e abb74346bf4d 2026-03-09T14:39:57.605 INFO:teuthology.orchestra.run.vm07.stdout:prometheus.a vm11 *:9095 running (99s) 44s ago 7m 43.2M - 2.51.0 1d3b7f56885b e88f0339687c 2026-03-09T14:39:57.605 INFO:teuthology.orchestra.run.vm07.stdout:rgw.foo.vm07.urmgxb vm07 *:8000 running (6m) 4s ago 6m 85.7M - 17.2.0 e1d6a67b021e 765128ae03a3 2026-03-09T14:39:57.605 INFO:teuthology.orchestra.run.vm07.stdout:rgw.foo.vm11.ncyump vm11 *:8000 running (6m) 44s ago 6m 84.7M - 17.2.0 e1d6a67b021e 33917711cfd6 2026-03-09T14:39:57.605 INFO:teuthology.orchestra.run.vm07.stdout:rgw.smpl.vm07.tkkeli vm07 *:80 running (6m) 4s ago 6m 84.9M - 17.2.0 e1d6a67b021e 377fed84fff0 2026-03-09T14:39:57.605 INFO:teuthology.orchestra.run.vm07.stdout:rgw.smpl.vm11.ocxkef vm11 *:80 running (6m) 44s ago 6m 84.8M - 17.2.0 e1d6a67b021e 90ec06d07cd4 2026-03-09T14:39:57.835 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T14:39:57.836 INFO:teuthology.orchestra.run.vm07.stdout: "mon": { 2026-03-09T14:39:57.836 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3 2026-03-09T14:39:57.836 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:39:57.836 INFO:teuthology.orchestra.run.vm07.stdout: "mgr": { 2026-03-09T14:39:57.836 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T14:39:57.836 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:39:57.836 INFO:teuthology.orchestra.run.vm07.stdout: "osd": { 2026-03-09T14:39:57.836 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 6, 2026-03-09T14:39:57.836 
INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T14:39:57.836 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:39:57.836 INFO:teuthology.orchestra.run.vm07.stdout: "rgw": { 2026-03-09T14:39:57.836 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 4 2026-03-09T14:39:57.836 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:39:57.836 INFO:teuthology.orchestra.run.vm07.stdout: "overall": { 2026-03-09T14:39:57.836 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 10, 2026-03-09T14:39:57.836 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 7 2026-03-09T14:39:57.836 INFO:teuthology.orchestra.run.vm07.stdout: } 2026-03-09T14:39:57.836 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T14:39:58.041 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T14:39:58.041 INFO:teuthology.orchestra.run.vm07.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", 2026-03-09T14:39:58.041 INFO:teuthology.orchestra.run.vm07.stdout: "in_progress": true, 2026-03-09T14:39:58.041 INFO:teuthology.orchestra.run.vm07.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-09T14:39:58.041 INFO:teuthology.orchestra.run.vm07.stdout: "services_complete": [ 2026-03-09T14:39:58.041 INFO:teuthology.orchestra.run.vm07.stdout: "mgr", 2026-03-09T14:39:58.041 INFO:teuthology.orchestra.run.vm07.stdout: "mon" 2026-03-09T14:39:58.041 INFO:teuthology.orchestra.run.vm07.stdout: ], 2026-03-09T14:39:58.041 INFO:teuthology.orchestra.run.vm07.stdout: "progress": "7/23 daemons upgraded", 2026-03-09T14:39:58.041 INFO:teuthology.orchestra.run.vm07.stdout: "message": "Currently upgrading osd daemons", 2026-03-09T14:39:58.041 INFO:teuthology.orchestra.run.vm07.stdout: "is_paused": false 2026-03-09T14:39:58.041 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T14:39:58.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:57 vm07 bash[55244]: cluster 2026-03-09T14:39:56.538124+0000 mgr.y (mgr.44103) 107 : cluster [DBG] pgmap v45: 161 pgs: 161 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:58.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:57 vm07 bash[55244]: cluster 2026-03-09T14:39:56.538124+0000 mgr.y (mgr.44103) 107 : cluster [DBG] pgmap v45: 161 pgs: 161 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:58.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:57 vm07 bash[55244]: audit 2026-03-09T14:39:57.206229+0000 mgr.y (mgr.44103) 108 : audit [DBG] from='client.44202 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:58.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:57 vm07 bash[55244]: audit 2026-03-09T14:39:57.206229+0000 mgr.y (mgr.44103) 108 : audit [DBG] from='client.44202 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:58.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:57 vm07 bash[55244]: audit 2026-03-09T14:39:57.410986+0000 mgr.y (mgr.44103) 109 : audit [DBG] from='client.44208 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", 
"target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:58.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:57 vm07 bash[55244]: audit 2026-03-09T14:39:57.410986+0000 mgr.y (mgr.44103) 109 : audit [DBG] from='client.44208 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:58.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:57 vm07 bash[55244]: audit 2026-03-09T14:39:57.843735+0000 mon.a (mon.0) 277 : audit [DBG] from='client.? 192.168.123.107:0/1139714508' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:58.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:57 vm07 bash[55244]: audit 2026-03-09T14:39:57.843735+0000 mon.a (mon.0) 277 : audit [DBG] from='client.? 192.168.123.107:0/1139714508' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:58.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:57 vm07 bash[56315]: cluster 2026-03-09T14:39:56.538124+0000 mgr.y (mgr.44103) 107 : cluster [DBG] pgmap v45: 161 pgs: 161 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:58.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:57 vm07 bash[56315]: cluster 2026-03-09T14:39:56.538124+0000 mgr.y (mgr.44103) 107 : cluster [DBG] pgmap v45: 161 pgs: 161 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:58.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:57 vm07 bash[56315]: audit 2026-03-09T14:39:57.206229+0000 mgr.y (mgr.44103) 108 : audit [DBG] from='client.44202 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:58.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:57 vm07 bash[56315]: audit 2026-03-09T14:39:57.206229+0000 mgr.y (mgr.44103) 108 : audit [DBG] from='client.44202 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:58.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:57 vm07 bash[56315]: audit 2026-03-09T14:39:57.410986+0000 mgr.y (mgr.44103) 109 : audit [DBG] from='client.44208 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:58.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:57 vm07 bash[56315]: audit 2026-03-09T14:39:57.410986+0000 mgr.y (mgr.44103) 109 : audit [DBG] from='client.44208 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:58.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:57 vm07 bash[56315]: audit 2026-03-09T14:39:57.843735+0000 mon.a (mon.0) 277 : audit [DBG] from='client.? 192.168.123.107:0/1139714508' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:58.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:57 vm07 bash[56315]: audit 2026-03-09T14:39:57.843735+0000 mon.a (mon.0) 277 : audit [DBG] from='client.? 
192.168.123.107:0/1139714508' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:58.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:57 vm11 bash[43577]: cluster 2026-03-09T14:39:56.538124+0000 mgr.y (mgr.44103) 107 : cluster [DBG] pgmap v45: 161 pgs: 161 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:58.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:57 vm11 bash[43577]: cluster 2026-03-09T14:39:56.538124+0000 mgr.y (mgr.44103) 107 : cluster [DBG] pgmap v45: 161 pgs: 161 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:39:58.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:57 vm11 bash[43577]: audit 2026-03-09T14:39:57.206229+0000 mgr.y (mgr.44103) 108 : audit [DBG] from='client.44202 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:58.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:57 vm11 bash[43577]: audit 2026-03-09T14:39:57.206229+0000 mgr.y (mgr.44103) 108 : audit [DBG] from='client.44202 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:58.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:57 vm11 bash[43577]: audit 2026-03-09T14:39:57.410986+0000 mgr.y (mgr.44103) 109 : audit [DBG] from='client.44208 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:58.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:57 vm11 bash[43577]: audit 2026-03-09T14:39:57.410986+0000 mgr.y (mgr.44103) 109 : audit [DBG] from='client.44208 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:58.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:57 vm11 bash[43577]: audit 2026-03-09T14:39:57.843735+0000 mon.a (mon.0) 277 : audit [DBG] from='client.? 192.168.123.107:0/1139714508' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:58.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:57 vm11 bash[43577]: audit 2026-03-09T14:39:57.843735+0000 mon.a (mon.0) 277 : audit [DBG] from='client.? 
192.168.123.107:0/1139714508' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:39:58.287 INFO:teuthology.orchestra.run.vm07.stdout:HEALTH_OK 2026-03-09T14:39:59.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:58 vm11 bash[43577]: audit 2026-03-09T14:39:57.498139+0000 mgr.y (mgr.44103) 110 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:39:59.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:58 vm11 bash[43577]: audit 2026-03-09T14:39:57.498139+0000 mgr.y (mgr.44103) 110 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:39:59.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:58 vm11 bash[43577]: audit 2026-03-09T14:39:57.608518+0000 mgr.y (mgr.44103) 111 : audit [DBG] from='client.44214 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:59.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:58 vm11 bash[43577]: audit 2026-03-09T14:39:57.608518+0000 mgr.y (mgr.44103) 111 : audit [DBG] from='client.44214 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:59.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:58 vm11 bash[43577]: audit 2026-03-09T14:39:58.049357+0000 mgr.y (mgr.44103) 112 : audit [DBG] from='client.44226 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:59.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:58 vm11 bash[43577]: audit 2026-03-09T14:39:58.049357+0000 mgr.y (mgr.44103) 112 : audit [DBG] from='client.44226 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:59.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:58 vm11 bash[43577]: audit 2026-03-09T14:39:58.294973+0000 mon.c (mon.1) 13 : audit [DBG] from='client.? 192.168.123.107:0/3544554227' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:39:59.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:58 vm11 bash[43577]: audit 2026-03-09T14:39:58.294973+0000 mon.c (mon.1) 13 : audit [DBG] from='client.? 
192.168.123.107:0/3544554227' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:39:59.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:58 vm07 bash[55244]: audit 2026-03-09T14:39:57.498139+0000 mgr.y (mgr.44103) 110 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:39:59.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:58 vm07 bash[55244]: audit 2026-03-09T14:39:57.498139+0000 mgr.y (mgr.44103) 110 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:39:59.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:58 vm07 bash[55244]: audit 2026-03-09T14:39:57.608518+0000 mgr.y (mgr.44103) 111 : audit [DBG] from='client.44214 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:59.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:58 vm07 bash[55244]: audit 2026-03-09T14:39:57.608518+0000 mgr.y (mgr.44103) 111 : audit [DBG] from='client.44214 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:59.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:58 vm07 bash[55244]: audit 2026-03-09T14:39:58.049357+0000 mgr.y (mgr.44103) 112 : audit [DBG] from='client.44226 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:59.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:58 vm07 bash[55244]: audit 2026-03-09T14:39:58.049357+0000 mgr.y (mgr.44103) 112 : audit [DBG] from='client.44226 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:59.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:58 vm07 bash[55244]: audit 2026-03-09T14:39:58.294973+0000 mon.c (mon.1) 13 : audit [DBG] from='client.? 192.168.123.107:0/3544554227' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:39:59.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:58 vm07 bash[55244]: audit 2026-03-09T14:39:58.294973+0000 mon.c (mon.1) 13 : audit [DBG] from='client.? 
192.168.123.107:0/3544554227' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:39:59.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:58 vm07 bash[56315]: audit 2026-03-09T14:39:57.498139+0000 mgr.y (mgr.44103) 110 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:39:59.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:58 vm07 bash[56315]: audit 2026-03-09T14:39:57.498139+0000 mgr.y (mgr.44103) 110 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:39:59.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:58 vm07 bash[56315]: audit 2026-03-09T14:39:57.608518+0000 mgr.y (mgr.44103) 111 : audit [DBG] from='client.44214 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:59.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:58 vm07 bash[56315]: audit 2026-03-09T14:39:57.608518+0000 mgr.y (mgr.44103) 111 : audit [DBG] from='client.44214 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:59.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:58 vm07 bash[56315]: audit 2026-03-09T14:39:58.049357+0000 mgr.y (mgr.44103) 112 : audit [DBG] from='client.44226 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:59.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:58 vm07 bash[56315]: audit 2026-03-09T14:39:58.049357+0000 mgr.y (mgr.44103) 112 : audit [DBG] from='client.44226 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:39:59.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:58 vm07 bash[56315]: audit 2026-03-09T14:39:58.294973+0000 mon.c (mon.1) 13 : audit [DBG] from='client.? 192.168.123.107:0/3544554227' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:39:59.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:58 vm07 bash[56315]: audit 2026-03-09T14:39:58.294973+0000 mon.c (mon.1) 13 : audit [DBG] from='client.? 
192.168.123.107:0/3544554227' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:40:00.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:59 vm11 bash[43577]: cluster 2026-03-09T14:39:58.538522+0000 mgr.y (mgr.44103) 113 : cluster [DBG] pgmap v46: 161 pgs: 161 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T14:40:00.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:39:59 vm11 bash[43577]: cluster 2026-03-09T14:39:58.538522+0000 mgr.y (mgr.44103) 113 : cluster [DBG] pgmap v46: 161 pgs: 161 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T14:40:00.301 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:59 vm07 bash[55244]: cluster 2026-03-09T14:39:58.538522+0000 mgr.y (mgr.44103) 113 : cluster [DBG] pgmap v46: 161 pgs: 161 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T14:40:00.301 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:39:59 vm07 bash[55244]: cluster 2026-03-09T14:39:58.538522+0000 mgr.y (mgr.44103) 113 : cluster [DBG] pgmap v46: 161 pgs: 161 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T14:40:00.301 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:59 vm07 bash[56315]: cluster 2026-03-09T14:39:58.538522+0000 mgr.y (mgr.44103) 113 : cluster [DBG] pgmap v46: 161 pgs: 161 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T14:40:00.301 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:39:59 vm07 bash[56315]: cluster 2026-03-09T14:39:58.538522+0000 mgr.y (mgr.44103) 113 : cluster [DBG] pgmap v46: 161 pgs: 161 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-09T14:40:00.935 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:00 vm07 bash[56315]: cluster 2026-03-09T14:40:00.000112+0000 mon.a (mon.0) 278 : cluster [INF] overall HEALTH_OK 2026-03-09T14:40:01.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:00 vm11 bash[43577]: cluster 2026-03-09T14:40:00.000112+0000 mon.a (mon.0) 278 : cluster [INF] overall HEALTH_OK 2026-03-09T14:40:01.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:00 vm11 bash[43577]: cluster 2026-03-09T14:40:00.000112+0000 mon.a (mon.0) 278 : cluster [INF] overall HEALTH_OK 2026-03-09T14:40:01.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:00 vm11 bash[43577]: audit 2026-03-09T14:40:00.367324+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:01.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:00 vm11 bash[43577]: audit 2026-03-09T14:40:00.367324+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:01.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:00 vm11 bash[43577]: audit 2026-03-09T14:40:00.373788+0000 mon.a (mon.0) 280 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:01.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:00 vm11 bash[43577]: audit 2026-03-09T14:40:00.373788+0000 mon.a (mon.0) 280 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:01.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:00 vm11 bash[43577]: audit 2026-03-09T14:40:00.374659+0000 mon.a (mon.0) 281 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": 
"config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:01.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:00 vm11 bash[43577]: audit 2026-03-09T14:40:00.374659+0000 mon.a (mon.0) 281 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:01.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:00 vm11 bash[43577]: audit 2026-03-09T14:40:00.375139+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:40:01.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:00 vm11 bash[43577]: audit 2026-03-09T14:40:00.375139+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:40:01.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:00 vm11 bash[43577]: audit 2026-03-09T14:40:00.379615+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:01.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:00 vm11 bash[43577]: audit 2026-03-09T14:40:00.379615+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:01.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:00 vm11 bash[43577]: audit 2026-03-09T14:40:00.420085+0000 mon.a (mon.0) 284 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:40:01.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:00 vm11 bash[43577]: audit 2026-03-09T14:40:00.420085+0000 mon.a (mon.0) 284 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:40:01.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:00 vm11 bash[43577]: audit 2026-03-09T14:40:00.421175+0000 mon.a (mon.0) 285 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:01.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:00 vm11 bash[43577]: audit 2026-03-09T14:40:00.421175+0000 mon.a (mon.0) 285 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:01.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:00 vm11 bash[43577]: audit 2026-03-09T14:40:00.421899+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:01.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:00 vm11 bash[43577]: audit 2026-03-09T14:40:00.421899+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:01.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:00 vm11 bash[43577]: audit 2026-03-09T14:40:00.422419+0000 mon.a (mon.0) 287 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:01.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:00 vm11 bash[43577]: audit 2026-03-09T14:40:00.422419+0000 mon.a (mon.0) 287 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 
2026-03-09T14:40:01.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:00 vm11 bash[43577]: audit 2026-03-09T14:40:00.423022+0000 mon.a (mon.0) 288 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-09T14:40:01.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:00 vm11 bash[43577]: audit 2026-03-09T14:40:00.423022+0000 mon.a (mon.0) 288 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-09T14:40:01.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:00 vm11 bash[43577]: audit 2026-03-09T14:40:00.423215+0000 mgr.y (mgr.44103) 114 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-09T14:40:01.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:00 vm11 bash[43577]: audit 2026-03-09T14:40:00.423215+0000 mgr.y (mgr.44103) 114 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-09T14:40:01.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:00 vm11 bash[43577]: cephadm 2026-03-09T14:40:00.423706+0000 mgr.y (mgr.44103) 115 : cephadm [INF] Upgrade: osd.0 is safe to restart 2026-03-09T14:40:01.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:00 vm11 bash[43577]: cephadm 2026-03-09T14:40:00.423706+0000 mgr.y (mgr.44103) 115 : cephadm [INF] Upgrade: osd.0 is safe to restart 2026-03-09T14:40:01.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:00 vm11 bash[43577]: audit 2026-03-09T14:40:00.852826+0000 mon.a (mon.0) 289 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:01.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:00 vm11 bash[43577]: audit 2026-03-09T14:40:00.852826+0000 mon.a (mon.0) 289 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:01.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:00 vm11 bash[43577]: audit 2026-03-09T14:40:00.856171+0000 mon.a (mon.0) 290 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T14:40:01.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:00 vm11 bash[43577]: audit 2026-03-09T14:40:00.856171+0000 mon.a (mon.0) 290 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T14:40:01.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:00 vm11 bash[43577]: audit 2026-03-09T14:40:00.857006+0000 mon.a (mon.0) 291 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:01.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:00 vm11 bash[43577]: audit 2026-03-09T14:40:00.857006+0000 mon.a (mon.0) 291 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:01.398 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:00 vm07 bash[56315]: cluster 2026-03-09T14:40:00.000112+0000 mon.a (mon.0) 278 : cluster [INF] overall HEALTH_OK 2026-03-09T14:40:01.398 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:00 vm07 bash[56315]: audit 2026-03-09T14:40:00.367324+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' 
entity='mgr.y' 2026-03-09T14:40:01.398 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:00 vm07 bash[56315]: audit 2026-03-09T14:40:00.367324+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:01.398 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:00 vm07 bash[56315]: audit 2026-03-09T14:40:00.373788+0000 mon.a (mon.0) 280 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:01.398 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:00 vm07 bash[56315]: audit 2026-03-09T14:40:00.373788+0000 mon.a (mon.0) 280 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:01.398 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:00 vm07 bash[56315]: audit 2026-03-09T14:40:00.374659+0000 mon.a (mon.0) 281 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:00 vm07 bash[56315]: audit 2026-03-09T14:40:00.374659+0000 mon.a (mon.0) 281 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:00 vm07 bash[56315]: audit 2026-03-09T14:40:00.375139+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:00 vm07 bash[56315]: audit 2026-03-09T14:40:00.375139+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:00 vm07 bash[56315]: audit 2026-03-09T14:40:00.379615+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:00 vm07 bash[56315]: audit 2026-03-09T14:40:00.379615+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:00 vm07 bash[56315]: audit 2026-03-09T14:40:00.420085+0000 mon.a (mon.0) 284 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:00 vm07 bash[56315]: audit 2026-03-09T14:40:00.420085+0000 mon.a (mon.0) 284 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:00 vm07 bash[56315]: audit 2026-03-09T14:40:00.421175+0000 mon.a (mon.0) 285 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:00 vm07 bash[56315]: audit 2026-03-09T14:40:00.421175+0000 mon.a (mon.0) 285 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:00 vm07 bash[56315]: audit 2026-03-09T14:40:00.421899+0000 mon.a 
(mon.0) 286 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:00 vm07 bash[56315]: audit 2026-03-09T14:40:00.421899+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:00 vm07 bash[56315]: audit 2026-03-09T14:40:00.422419+0000 mon.a (mon.0) 287 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:00 vm07 bash[56315]: audit 2026-03-09T14:40:00.422419+0000 mon.a (mon.0) 287 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:00 vm07 bash[56315]: audit 2026-03-09T14:40:00.423022+0000 mon.a (mon.0) 288 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:00 vm07 bash[56315]: audit 2026-03-09T14:40:00.423022+0000 mon.a (mon.0) 288 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:00 vm07 bash[56315]: audit 2026-03-09T14:40:00.423215+0000 mgr.y (mgr.44103) 114 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:00 vm07 bash[56315]: audit 2026-03-09T14:40:00.423215+0000 mgr.y (mgr.44103) 114 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:00 vm07 bash[56315]: cephadm 2026-03-09T14:40:00.423706+0000 mgr.y (mgr.44103) 115 : cephadm [INF] Upgrade: osd.0 is safe to restart 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:00 vm07 bash[56315]: cephadm 2026-03-09T14:40:00.423706+0000 mgr.y (mgr.44103) 115 : cephadm [INF] Upgrade: osd.0 is safe to restart 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:00 vm07 bash[56315]: audit 2026-03-09T14:40:00.852826+0000 mon.a (mon.0) 289 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:00 vm07 bash[56315]: audit 2026-03-09T14:40:00.852826+0000 mon.a (mon.0) 289 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:00 vm07 bash[56315]: audit 2026-03-09T14:40:00.856171+0000 mon.a (mon.0) 290 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:00 vm07 bash[56315]: audit 2026-03-09T14:40:00.856171+0000 mon.a (mon.0) 290 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:00 vm07 bash[56315]: audit 2026-03-09T14:40:00.857006+0000 mon.a (mon.0) 291 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:00 vm07 bash[56315]: audit 2026-03-09T14:40:00.857006+0000 mon.a (mon.0) 291 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:00 vm07 bash[55244]: cluster 2026-03-09T14:40:00.000112+0000 mon.a (mon.0) 278 : cluster [INF] overall HEALTH_OK 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:00 vm07 bash[55244]: cluster 2026-03-09T14:40:00.000112+0000 mon.a (mon.0) 278 : cluster [INF] overall HEALTH_OK 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:00 vm07 bash[55244]: audit 2026-03-09T14:40:00.367324+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:00 vm07 bash[55244]: audit 2026-03-09T14:40:00.367324+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:00 vm07 bash[55244]: audit 2026-03-09T14:40:00.373788+0000 mon.a (mon.0) 280 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:00 vm07 bash[55244]: audit 2026-03-09T14:40:00.373788+0000 mon.a (mon.0) 280 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:00 vm07 bash[55244]: audit 2026-03-09T14:40:00.374659+0000 mon.a (mon.0) 281 : audit [DBG] from='mgr.44103 
192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:00 vm07 bash[55244]: audit 2026-03-09T14:40:00.374659+0000 mon.a (mon.0) 281 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:00 vm07 bash[55244]: audit 2026-03-09T14:40:00.375139+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:00 vm07 bash[55244]: audit 2026-03-09T14:40:00.375139+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:00 vm07 bash[55244]: audit 2026-03-09T14:40:00.379615+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:00 vm07 bash[55244]: audit 2026-03-09T14:40:00.379615+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:00 vm07 bash[55244]: audit 2026-03-09T14:40:00.420085+0000 mon.a (mon.0) 284 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:00 vm07 bash[55244]: audit 2026-03-09T14:40:00.420085+0000 mon.a (mon.0) 284 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:00 vm07 bash[55244]: audit 2026-03-09T14:40:00.421175+0000 mon.a (mon.0) 285 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:00 vm07 bash[55244]: audit 2026-03-09T14:40:00.421175+0000 mon.a (mon.0) 285 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:00 vm07 bash[55244]: audit 2026-03-09T14:40:00.421899+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:00 vm07 bash[55244]: audit 2026-03-09T14:40:00.421899+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:00 vm07 bash[55244]: audit 2026-03-09T14:40:00.422419+0000 mon.a (mon.0) 287 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:00 vm07 bash[55244]: audit 2026-03-09T14:40:00.422419+0000 mon.a (mon.0) 287 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' 
entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:00 vm07 bash[55244]: audit 2026-03-09T14:40:00.423022+0000 mon.a (mon.0) 288 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-09T14:40:01.399 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:00 vm07 bash[55244]: audit 2026-03-09T14:40:00.423022+0000 mon.a (mon.0) 288 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-09T14:40:01.400 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:00 vm07 bash[55244]: audit 2026-03-09T14:40:00.423215+0000 mgr.y (mgr.44103) 114 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-09T14:40:01.400 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:00 vm07 bash[55244]: audit 2026-03-09T14:40:00.423215+0000 mgr.y (mgr.44103) 114 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-09T14:40:01.400 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:00 vm07 bash[55244]: cephadm 2026-03-09T14:40:00.423706+0000 mgr.y (mgr.44103) 115 : cephadm [INF] Upgrade: osd.0 is safe to restart 2026-03-09T14:40:01.400 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:00 vm07 bash[55244]: cephadm 2026-03-09T14:40:00.423706+0000 mgr.y (mgr.44103) 115 : cephadm [INF] Upgrade: osd.0 is safe to restart 2026-03-09T14:40:01.400 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:00 vm07 bash[55244]: audit 2026-03-09T14:40:00.852826+0000 mon.a (mon.0) 289 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:01.400 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:00 vm07 bash[55244]: audit 2026-03-09T14:40:00.852826+0000 mon.a (mon.0) 289 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:01.400 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:00 vm07 bash[55244]: audit 2026-03-09T14:40:00.856171+0000 mon.a (mon.0) 290 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T14:40:01.400 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:00 vm07 bash[55244]: audit 2026-03-09T14:40:00.856171+0000 mon.a (mon.0) 290 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T14:40:01.400 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:00 vm07 bash[55244]: audit 2026-03-09T14:40:00.857006+0000 mon.a (mon.0) 291 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:01.400 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:00 vm07 bash[55244]: audit 2026-03-09T14:40:00.857006+0000 mon.a (mon.0) 291 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:01.956 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:01 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:01.956 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:40:01 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:01.956 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:40:01 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:01.957 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:40:01 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:01.957 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:40:01 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:01.959 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:01 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:01.959 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:40:01 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:01.959 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:40:01 vm07 systemd[1]: Stopping Ceph osd.0 for f59f9828-1bc3-11f1-bfd8-7b3d0c866040... 2026-03-09T14:40:01.959 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:40:01 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:40:01.959 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:40:01 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:02.380 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:40:02 vm07 bash[25297]: debug 2026-03-09T14:40:01.993+0000 7f52f4939700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T14:40:02.380 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:40:02 vm07 bash[25297]: debug 2026-03-09T14:40:01.993+0000 7f52f4939700 -1 osd.0 102 *** Got signal Terminated *** 2026-03-09T14:40:02.380 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:40:02 vm07 bash[25297]: debug 2026-03-09T14:40:01.993+0000 7f52f4939700 -1 osd.0 102 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T14:40:02.655 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:40:02 vm07 bash[62906]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-osd-0 2026-03-09T14:40:02.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:02 vm07 bash[56315]: cluster 2026-03-09T14:40:00.538807+0000 mgr.y (mgr.44103) 116 : cluster [DBG] pgmap v47: 161 pgs: 161 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:40:02.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:02 vm07 bash[56315]: cluster 2026-03-09T14:40:00.538807+0000 mgr.y (mgr.44103) 116 : cluster [DBG] pgmap v47: 161 pgs: 161 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:40:02.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:02 vm07 bash[56315]: cephadm 2026-03-09T14:40:00.847533+0000 mgr.y (mgr.44103) 117 : cephadm [INF] Upgrade: Updating osd.0 2026-03-09T14:40:02.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:02 vm07 bash[56315]: cephadm 2026-03-09T14:40:00.847533+0000 mgr.y (mgr.44103) 117 : cephadm [INF] Upgrade: Updating osd.0 2026-03-09T14:40:02.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:02 vm07 bash[56315]: cephadm 2026-03-09T14:40:00.858728+0000 mgr.y (mgr.44103) 118 : cephadm [INF] Deploying daemon osd.0 on vm07 2026-03-09T14:40:02.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:02 vm07 bash[56315]: cephadm 2026-03-09T14:40:00.858728+0000 mgr.y (mgr.44103) 118 : cephadm [INF] Deploying daemon osd.0 on vm07 2026-03-09T14:40:02.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:02 vm07 bash[56315]: cluster 2026-03-09T14:40:01.999715+0000 mon.a (mon.0) 292 : cluster [INF] osd.0 marked itself down and dead 2026-03-09T14:40:02.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:02 vm07 bash[56315]: cluster 2026-03-09T14:40:01.999715+0000 mon.a (mon.0) 292 : cluster [INF] osd.0 marked itself down and dead 2026-03-09T14:40:02.655 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:02 vm07 bash[55244]: cluster 2026-03-09T14:40:00.538807+0000 mgr.y (mgr.44103) 116 : cluster [DBG] pgmap v47: 161 pgs: 161 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:40:02.655 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:02 vm07 
bash[55244]: cluster 2026-03-09T14:40:00.538807+0000 mgr.y (mgr.44103) 116 : cluster [DBG] pgmap v47: 161 pgs: 161 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:40:02.655 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:02 vm07 bash[55244]: cephadm 2026-03-09T14:40:00.847533+0000 mgr.y (mgr.44103) 117 : cephadm [INF] Upgrade: Updating osd.0 2026-03-09T14:40:02.655 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:02 vm07 bash[55244]: cephadm 2026-03-09T14:40:00.847533+0000 mgr.y (mgr.44103) 117 : cephadm [INF] Upgrade: Updating osd.0 2026-03-09T14:40:02.655 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:02 vm07 bash[55244]: cephadm 2026-03-09T14:40:00.858728+0000 mgr.y (mgr.44103) 118 : cephadm [INF] Deploying daemon osd.0 on vm07 2026-03-09T14:40:02.655 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:02 vm07 bash[55244]: cephadm 2026-03-09T14:40:00.858728+0000 mgr.y (mgr.44103) 118 : cephadm [INF] Deploying daemon osd.0 on vm07 2026-03-09T14:40:02.655 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:02 vm07 bash[55244]: cluster 2026-03-09T14:40:01.999715+0000 mon.a (mon.0) 292 : cluster [INF] osd.0 marked itself down and dead 2026-03-09T14:40:02.655 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:02 vm07 bash[55244]: cluster 2026-03-09T14:40:01.999715+0000 mon.a (mon.0) 292 : cluster [INF] osd.0 marked itself down and dead 2026-03-09T14:40:02.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:02 vm11 bash[43577]: cluster 2026-03-09T14:40:00.538807+0000 mgr.y (mgr.44103) 116 : cluster [DBG] pgmap v47: 161 pgs: 161 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:40:02.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:02 vm11 bash[43577]: cluster 2026-03-09T14:40:00.538807+0000 mgr.y (mgr.44103) 116 : cluster [DBG] pgmap v47: 161 pgs: 161 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:40:02.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:02 vm11 bash[43577]: cephadm 2026-03-09T14:40:00.847533+0000 mgr.y (mgr.44103) 117 : cephadm [INF] Upgrade: Updating osd.0 2026-03-09T14:40:02.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:02 vm11 bash[43577]: cephadm 2026-03-09T14:40:00.847533+0000 mgr.y (mgr.44103) 117 : cephadm [INF] Upgrade: Updating osd.0 2026-03-09T14:40:02.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:02 vm11 bash[43577]: cephadm 2026-03-09T14:40:00.858728+0000 mgr.y (mgr.44103) 118 : cephadm [INF] Deploying daemon osd.0 on vm07 2026-03-09T14:40:02.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:02 vm11 bash[43577]: cephadm 2026-03-09T14:40:00.858728+0000 mgr.y (mgr.44103) 118 : cephadm [INF] Deploying daemon osd.0 on vm07 2026-03-09T14:40:02.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:02 vm11 bash[43577]: cluster 2026-03-09T14:40:01.999715+0000 mon.a (mon.0) 292 : cluster [INF] osd.0 marked itself down and dead 2026-03-09T14:40:02.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:02 vm11 bash[43577]: cluster 2026-03-09T14:40:01.999715+0000 mon.a (mon.0) 292 : cluster [INF] osd.0 marked itself down and dead 2026-03-09T14:40:03.052 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:02 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:03.052 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:40:02 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:03.052 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:02 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:03.052 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:40:02 vm07 systemd[1]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@osd.0.service: Deactivated successfully. 2026-03-09T14:40:03.052 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:40:02 vm07 systemd[1]: Stopped Ceph osd.0 for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:40:03.052 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:40:02 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:03.052 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:40:02 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:03.052 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:40:02 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:03.052 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:40:02 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:03.052 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:40:02 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:03.052 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:40:02 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:03.386 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:40:03 vm07 systemd[1]: Started Ceph osd.0 for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:40:03.386 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:40:03 vm07 bash[63111]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T14:40:03.387 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:40:03 vm07 bash[63111]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T14:40:03.655 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:03 vm07 bash[55244]: cluster 2026-03-09T14:40:02.386515+0000 mon.a (mon.0) 293 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:40:03.655 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:03 vm07 bash[55244]: cluster 2026-03-09T14:40:02.386515+0000 mon.a (mon.0) 293 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:40:03.655 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:03 vm07 bash[55244]: cluster 2026-03-09T14:40:02.401075+0000 mon.a (mon.0) 294 : cluster [DBG] osdmap e103: 8 total, 7 up, 8 in 2026-03-09T14:40:03.655 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:03 vm07 bash[55244]: cluster 2026-03-09T14:40:02.401075+0000 mon.a (mon.0) 294 : cluster [DBG] osdmap e103: 8 total, 7 up, 8 in 2026-03-09T14:40:03.655 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:03 vm07 bash[55244]: audit 2026-03-09T14:40:02.597638+0000 mon.a (mon.0) 295 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:03.655 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:03 vm07 bash[55244]: audit 2026-03-09T14:40:02.597638+0000 mon.a (mon.0) 295 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:03.655 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:03 vm07 bash[55244]: audit 2026-03-09T14:40:03.092507+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:03.655 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:03 vm07 bash[55244]: audit 2026-03-09T14:40:03.092507+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:03.655 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:03 vm07 bash[55244]: audit 2026-03-09T14:40:03.101089+0000 mon.a (mon.0) 297 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:03.655 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:03 vm07 bash[55244]: audit 2026-03-09T14:40:03.101089+0000 mon.a (mon.0) 297 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:03.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:03 vm07 bash[56315]: cluster 2026-03-09T14:40:02.386515+0000 mon.a (mon.0) 293 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:40:03.655 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:03 vm07 bash[56315]: cluster 2026-03-09T14:40:02.386515+0000 mon.a (mon.0) 293 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:40:03.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:03 vm07 bash[56315]: cluster 2026-03-09T14:40:02.401075+0000 mon.a (mon.0) 294 : cluster [DBG] osdmap e103: 8 total, 7 up, 8 in 2026-03-09T14:40:03.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:03 vm07 bash[56315]: cluster 2026-03-09T14:40:02.401075+0000 mon.a (mon.0) 294 : cluster [DBG] osdmap e103: 8 total, 7 up, 8 in 2026-03-09T14:40:03.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:03 vm07 bash[56315]: audit 2026-03-09T14:40:02.597638+0000 mon.a (mon.0) 295 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:03.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:03 vm07 bash[56315]: audit 2026-03-09T14:40:02.597638+0000 mon.a (mon.0) 295 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:03.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:03 vm07 bash[56315]: audit 2026-03-09T14:40:03.092507+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:03.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:03 vm07 bash[56315]: audit 2026-03-09T14:40:03.092507+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:03.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:03 vm07 bash[56315]: audit 2026-03-09T14:40:03.101089+0000 mon.a (mon.0) 297 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:03.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:03 vm07 bash[56315]: audit 2026-03-09T14:40:03.101089+0000 mon.a (mon.0) 297 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:03.655 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:40:03 vm07 bash[52213]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:40:03] "GET /metrics HTTP/1.1" 200 37760 "" "Prometheus/2.51.0" 2026-03-09T14:40:03.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:03 vm11 bash[43577]: cluster 2026-03-09T14:40:02.386515+0000 mon.a (mon.0) 293 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:40:03.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:03 vm11 bash[43577]: cluster 2026-03-09T14:40:02.386515+0000 mon.a (mon.0) 293 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:40:03.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:03 vm11 bash[43577]: cluster 2026-03-09T14:40:02.401075+0000 mon.a (mon.0) 294 : cluster [DBG] osdmap e103: 8 total, 7 up, 8 in 2026-03-09T14:40:03.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:03 vm11 bash[43577]: cluster 2026-03-09T14:40:02.401075+0000 mon.a (mon.0) 294 : cluster [DBG] osdmap e103: 8 total, 7 up, 8 in 2026-03-09T14:40:03.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:03 vm11 bash[43577]: audit 2026-03-09T14:40:02.597638+0000 mon.a (mon.0) 295 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:03.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:03 vm11 bash[43577]: audit 2026-03-09T14:40:02.597638+0000 mon.a (mon.0) 295 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:03.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:03 vm11 bash[43577]: 
audit 2026-03-09T14:40:03.092507+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:03.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:03 vm11 bash[43577]: audit 2026-03-09T14:40:03.092507+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:03.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:03 vm11 bash[43577]: audit 2026-03-09T14:40:03.101089+0000 mon.a (mon.0) 297 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:03.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:03 vm11 bash[43577]: audit 2026-03-09T14:40:03.101089+0000 mon.a (mon.0) 297 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:04.405 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:40:04 vm07 bash[63111]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T14:40:04.405 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:40:04 vm07 bash[63111]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T14:40:04.405 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:40:04 vm07 bash[63111]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T14:40:04.405 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:40:04 vm07 bash[63111]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 2026-03-09T14:40:04.405 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:40:04 vm07 bash[63111]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-ec7ace52-d7b3-4a08-b9c4-52f4637ee1e2/osd-block-01f1c7a2-0d56-449a-98b5-2d0134c34758 --path /var/lib/ceph/osd/ceph-0 --no-mon-config 2026-03-09T14:40:04.410 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:40:04 vm11 bash[41290]: ts=2026-03-09T14:40:04.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"192.168.123.111:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:40:04.668 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:40:04 vm07 bash[63111]: Running command: /usr/bin/ln -snf /dev/ceph-ec7ace52-d7b3-4a08-b9c4-52f4637ee1e2/osd-block-01f1c7a2-0d56-449a-98b5-2d0134c34758 /var/lib/ceph/osd/ceph-0/block 2026-03-09T14:40:04.669 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:40:04 vm07 bash[63111]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block 2026-03-09T14:40:04.669 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:40:04 vm07 bash[63111]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 2026-03-09T14:40:04.669 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:40:04 vm07 bash[63111]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 2026-03-09T14:40:04.669 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:40:04 vm07 bash[63111]: --> ceph-volume lvm activate successful for osd ID: 0 2026-03-09T14:40:04.669 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:04 vm07 bash[55244]: cluster 2026-03-09T14:40:02.539166+0000 mgr.y (mgr.44103) 119 : cluster [DBG] pgmap v49: 161 pgs: 21 stale+active+clean, 140 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T14:40:04.669 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:04 vm07 bash[55244]: cluster 2026-03-09T14:40:02.539166+0000 mgr.y (mgr.44103) 119 : cluster [DBG] pgmap v49: 161 pgs: 21 stale+active+clean, 140 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T14:40:04.669 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:04 vm07 bash[55244]: cluster 2026-03-09T14:40:03.446335+0000 mon.a (mon.0) 298 : cluster [DBG] osdmap e104: 8 total, 7 up, 8 in 2026-03-09T14:40:04.669 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:04 vm07 bash[55244]: cluster 2026-03-09T14:40:03.446335+0000 mon.a (mon.0) 298 : cluster [DBG] osdmap e104: 8 total, 7 up, 8 in 2026-03-09T14:40:04.669 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:04 vm07 bash[55244]: audit 2026-03-09T14:40:03.459398+0000 mon.a (mon.0) 299 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:04.669 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:04 vm07 bash[55244]: audit 2026-03-09T14:40:03.459398+0000 mon.a (mon.0) 299 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:04.669 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:04 vm07 bash[55244]: audit 2026-03-09T14:40:03.469261+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:04.669 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:04 vm07 bash[55244]: audit 2026-03-09T14:40:03.469261+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:04.669 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:04 vm07 bash[56315]: cluster 2026-03-09T14:40:02.539166+0000 mgr.y (mgr.44103) 119 : cluster [DBG] pgmap v49: 161 pgs: 21 stale+active+clean, 140 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T14:40:04.669 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:04 vm07 bash[56315]: cluster 2026-03-09T14:40:02.539166+0000 mgr.y (mgr.44103) 119 : cluster [DBG] pgmap v49: 161 pgs: 21 stale+active+clean, 140 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T14:40:04.669 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:04 vm07 bash[56315]: cluster 2026-03-09T14:40:03.446335+0000 mon.a (mon.0) 298 : cluster [DBG] osdmap e104: 8 total, 7 up, 8 in 2026-03-09T14:40:04.669 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:04 vm07 bash[56315]: cluster 2026-03-09T14:40:03.446335+0000 mon.a (mon.0) 298 : cluster [DBG] osdmap e104: 8 total, 7 up, 8 in 2026-03-09T14:40:04.669 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:04 vm07 bash[56315]: audit 2026-03-09T14:40:03.459398+0000 mon.a (mon.0) 299 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:04.669 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:04 vm07 bash[56315]: audit 2026-03-09T14:40:03.459398+0000 mon.a (mon.0) 299 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:04.669 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:04 vm07 bash[56315]: audit 2026-03-09T14:40:03.469261+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:04.669 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:04 vm07 bash[56315]: audit 2026-03-09T14:40:03.469261+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:04.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:04 vm11 bash[43577]: cluster 2026-03-09T14:40:02.539166+0000 mgr.y (mgr.44103) 119 : cluster [DBG] pgmap v49: 161 pgs: 21 stale+active+clean, 140 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T14:40:04.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:04 vm11 bash[43577]: cluster 2026-03-09T14:40:02.539166+0000 mgr.y (mgr.44103) 119 : cluster [DBG] pgmap v49: 161 pgs: 21 stale+active+clean, 140 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T14:40:04.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:04 vm11 bash[43577]: cluster 2026-03-09T14:40:03.446335+0000 mon.a (mon.0) 298 : cluster [DBG] osdmap e104: 8 total, 7 up, 8 in 2026-03-09T14:40:04.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:04 vm11 bash[43577]: cluster 2026-03-09T14:40:03.446335+0000 mon.a (mon.0) 298 : cluster [DBG] osdmap e104: 8 total, 7 up, 8 in 2026-03-09T14:40:04.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:04 vm11 bash[43577]: audit 2026-03-09T14:40:03.459398+0000 mon.a (mon.0) 299 : audit 
[INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:04.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:04 vm11 bash[43577]: audit 2026-03-09T14:40:03.459398+0000 mon.a (mon.0) 299 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:04.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:04 vm11 bash[43577]: audit 2026-03-09T14:40:03.469261+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:04.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:04 vm11 bash[43577]: audit 2026-03-09T14:40:03.469261+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:05.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:05 vm11 bash[43577]: cluster 2026-03-09T14:40:05.416726+0000 mon.a (mon.0) 301 : cluster [WRN] Health check failed: Degraded data redundancy: 51/627 objects degraded (8.134%), 14 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:05.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:05 vm11 bash[43577]: cluster 2026-03-09T14:40:05.416726+0000 mon.a (mon.0) 301 : cluster [WRN] Health check failed: Degraded data redundancy: 51/627 objects degraded (8.134%), 14 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:05.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:05 vm07 bash[55244]: cluster 2026-03-09T14:40:05.416726+0000 mon.a (mon.0) 301 : cluster [WRN] Health check failed: Degraded data redundancy: 51/627 objects degraded (8.134%), 14 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:05.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:05 vm07 bash[55244]: cluster 2026-03-09T14:40:05.416726+0000 mon.a (mon.0) 301 : cluster [WRN] Health check failed: Degraded data redundancy: 51/627 objects degraded (8.134%), 14 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:05.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:05 vm07 bash[56315]: cluster 2026-03-09T14:40:05.416726+0000 mon.a (mon.0) 301 : cluster [WRN] Health check failed: Degraded data redundancy: 51/627 objects degraded (8.134%), 14 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:05.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:05 vm07 bash[56315]: cluster 2026-03-09T14:40:05.416726+0000 mon.a (mon.0) 301 : cluster [WRN] Health check failed: Degraded data redundancy: 51/627 objects degraded (8.134%), 14 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:05.905 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:40:05 vm07 bash[63469]: debug 2026-03-09T14:40:05.417+0000 7f65f3560740 -1 Falling back to public interface 2026-03-09T14:40:06.655 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:40:06 vm07 bash[63469]: debug 2026-03-09T14:40:06.365+0000 7f65f3560740 -1 osd.0 0 read_superblock omap replica is missing. 
2026-03-09T14:40:06.655 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:40:06 vm07 bash[63469]: debug 2026-03-09T14:40:06.397+0000 7f65f3560740 -1 osd.0 102 log_to_monitors true 2026-03-09T14:40:06.655 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:40:06 vm07 bash[63469]: debug 2026-03-09T14:40:06.509+0000 7f65eb30b640 -1 osd.0 102 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T14:40:06.655 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:06 vm07 bash[55244]: cluster 2026-03-09T14:40:04.539713+0000 mgr.y (mgr.44103) 120 : cluster [DBG] pgmap v51: 161 pgs: 26 active+undersized, 12 peering, 14 active+undersized+degraded, 109 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 127 B/s rd, 0 op/s; 51/627 objects degraded (8.134%) 2026-03-09T14:40:06.655 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:06 vm07 bash[55244]: cluster 2026-03-09T14:40:04.539713+0000 mgr.y (mgr.44103) 120 : cluster [DBG] pgmap v51: 161 pgs: 26 active+undersized, 12 peering, 14 active+undersized+degraded, 109 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 127 B/s rd, 0 op/s; 51/627 objects degraded (8.134%) 2026-03-09T14:40:06.655 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:06 vm07 bash[55244]: audit 2026-03-09T14:40:06.403850+0000 mon.b (mon.2) 3 : audit [INF] from='osd.0 [v2:192.168.123.107:6802/1245966974,v1:192.168.123.107:6803/1245966974]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T14:40:06.655 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:06 vm07 bash[55244]: audit 2026-03-09T14:40:06.403850+0000 mon.b (mon.2) 3 : audit [INF] from='osd.0 [v2:192.168.123.107:6802/1245966974,v1:192.168.123.107:6803/1245966974]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T14:40:06.655 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:06 vm07 bash[55244]: audit 2026-03-09T14:40:06.408700+0000 mon.a (mon.0) 302 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T14:40:06.655 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:06 vm07 bash[55244]: audit 2026-03-09T14:40:06.408700+0000 mon.a (mon.0) 302 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T14:40:06.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:06 vm07 bash[56315]: cluster 2026-03-09T14:40:04.539713+0000 mgr.y (mgr.44103) 120 : cluster [DBG] pgmap v51: 161 pgs: 26 active+undersized, 12 peering, 14 active+undersized+degraded, 109 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 127 B/s rd, 0 op/s; 51/627 objects degraded (8.134%) 2026-03-09T14:40:06.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:06 vm07 bash[56315]: cluster 2026-03-09T14:40:04.539713+0000 mgr.y (mgr.44103) 120 : cluster [DBG] pgmap v51: 161 pgs: 26 active+undersized, 12 peering, 14 active+undersized+degraded, 109 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 127 B/s rd, 0 op/s; 51/627 objects degraded (8.134%) 2026-03-09T14:40:06.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:06 vm07 bash[56315]: audit 2026-03-09T14:40:06.403850+0000 mon.b (mon.2) 3 : audit [INF] from='osd.0 [v2:192.168.123.107:6802/1245966974,v1:192.168.123.107:6803/1245966974]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", 
"class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T14:40:06.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:06 vm07 bash[56315]: audit 2026-03-09T14:40:06.403850+0000 mon.b (mon.2) 3 : audit [INF] from='osd.0 [v2:192.168.123.107:6802/1245966974,v1:192.168.123.107:6803/1245966974]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T14:40:06.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:06 vm07 bash[56315]: audit 2026-03-09T14:40:06.408700+0000 mon.a (mon.0) 302 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T14:40:06.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:06 vm07 bash[56315]: audit 2026-03-09T14:40:06.408700+0000 mon.a (mon.0) 302 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T14:40:06.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:06 vm11 bash[43577]: cluster 2026-03-09T14:40:04.539713+0000 mgr.y (mgr.44103) 120 : cluster [DBG] pgmap v51: 161 pgs: 26 active+undersized, 12 peering, 14 active+undersized+degraded, 109 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 127 B/s rd, 0 op/s; 51/627 objects degraded (8.134%) 2026-03-09T14:40:06.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:06 vm11 bash[43577]: cluster 2026-03-09T14:40:04.539713+0000 mgr.y (mgr.44103) 120 : cluster [DBG] pgmap v51: 161 pgs: 26 active+undersized, 12 peering, 14 active+undersized+degraded, 109 active+clean; 457 KiB data, 143 MiB used, 160 GiB / 160 GiB avail; 127 B/s rd, 0 op/s; 51/627 objects degraded (8.134%) 2026-03-09T14:40:06.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:06 vm11 bash[43577]: audit 2026-03-09T14:40:06.403850+0000 mon.b (mon.2) 3 : audit [INF] from='osd.0 [v2:192.168.123.107:6802/1245966974,v1:192.168.123.107:6803/1245966974]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T14:40:06.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:06 vm11 bash[43577]: audit 2026-03-09T14:40:06.403850+0000 mon.b (mon.2) 3 : audit [INF] from='osd.0 [v2:192.168.123.107:6802/1245966974,v1:192.168.123.107:6803/1245966974]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T14:40:06.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:06 vm11 bash[43577]: audit 2026-03-09T14:40:06.408700+0000 mon.a (mon.0) 302 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T14:40:06.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:06 vm11 bash[43577]: audit 2026-03-09T14:40:06.408700+0000 mon.a (mon.0) 302 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T14:40:07.253 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:40:06 vm11 bash[41290]: ts=2026-03-09T14:40:06.947Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: 
warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:40:07.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:07 vm11 bash[43577]: audit 2026-03-09T14:40:06.479288+0000 mon.a (mon.0) 303 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T14:40:07.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:07 vm11 bash[43577]: audit 2026-03-09T14:40:06.479288+0000 mon.a (mon.0) 303 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T14:40:07.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:07 vm11 bash[43577]: audit 2026-03-09T14:40:06.481787+0000 mon.b (mon.2) 4 : audit [INF] from='osd.0 [v2:192.168.123.107:6802/1245966974,v1:192.168.123.107:6803/1245966974]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:40:07.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:07 vm11 bash[43577]: audit 2026-03-09T14:40:06.481787+0000 mon.b (mon.2) 4 : audit [INF] from='osd.0 [v2:192.168.123.107:6802/1245966974,v1:192.168.123.107:6803/1245966974]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:40:07.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:07 vm11 bash[43577]: cluster 2026-03-09T14:40:06.484925+0000 mon.a (mon.0) 304 : cluster [DBG] osdmap e105: 8 total, 7 up, 8 in 2026-03-09T14:40:07.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:07 vm11 bash[43577]: cluster 2026-03-09T14:40:06.484925+0000 mon.a (mon.0) 304 : cluster [DBG] osdmap e105: 8 total, 7 up, 8 in 2026-03-09T14:40:07.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:07 vm11 bash[43577]: audit 2026-03-09T14:40:06.486419+0000 mon.a (mon.0) 305 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:40:07.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:07 vm11 bash[43577]: audit 2026-03-09T14:40:06.486419+0000 mon.a (mon.0) 305 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:40:07.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:07 vm11 bash[43577]: cluster 2026-03-09T14:40:06.540107+0000 mgr.y (mgr.44103) 121 : cluster [DBG] pgmap v53: 161 pgs: 30 active+undersized, 12 
peering, 16 active+undersized+degraded, 103 active+clean; 457 KiB data, 162 MiB used, 160 GiB / 160 GiB avail; 53/627 objects degraded (8.453%) 2026-03-09T14:40:07.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:07 vm11 bash[43577]: cluster 2026-03-09T14:40:06.540107+0000 mgr.y (mgr.44103) 121 : cluster [DBG] pgmap v53: 161 pgs: 30 active+undersized, 12 peering, 16 active+undersized+degraded, 103 active+clean; 457 KiB data, 162 MiB used, 160 GiB / 160 GiB avail; 53/627 objects degraded (8.453%) 2026-03-09T14:40:07.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:07 vm07 bash[55244]: audit 2026-03-09T14:40:06.479288+0000 mon.a (mon.0) 303 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T14:40:07.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:07 vm07 bash[55244]: audit 2026-03-09T14:40:06.479288+0000 mon.a (mon.0) 303 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T14:40:07.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:07 vm07 bash[55244]: audit 2026-03-09T14:40:06.481787+0000 mon.b (mon.2) 4 : audit [INF] from='osd.0 [v2:192.168.123.107:6802/1245966974,v1:192.168.123.107:6803/1245966974]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:40:07.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:07 vm07 bash[55244]: audit 2026-03-09T14:40:06.481787+0000 mon.b (mon.2) 4 : audit [INF] from='osd.0 [v2:192.168.123.107:6802/1245966974,v1:192.168.123.107:6803/1245966974]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:40:07.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:07 vm07 bash[55244]: cluster 2026-03-09T14:40:06.484925+0000 mon.a (mon.0) 304 : cluster [DBG] osdmap e105: 8 total, 7 up, 8 in 2026-03-09T14:40:07.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:07 vm07 bash[55244]: cluster 2026-03-09T14:40:06.484925+0000 mon.a (mon.0) 304 : cluster [DBG] osdmap e105: 8 total, 7 up, 8 in 2026-03-09T14:40:07.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:07 vm07 bash[55244]: audit 2026-03-09T14:40:06.486419+0000 mon.a (mon.0) 305 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:40:07.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:07 vm07 bash[55244]: audit 2026-03-09T14:40:06.486419+0000 mon.a (mon.0) 305 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:40:07.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:07 vm07 bash[55244]: cluster 2026-03-09T14:40:06.540107+0000 mgr.y (mgr.44103) 121 : cluster [DBG] pgmap v53: 161 pgs: 30 active+undersized, 12 peering, 16 active+undersized+degraded, 103 active+clean; 457 KiB data, 162 MiB used, 160 GiB / 160 GiB avail; 53/627 objects degraded (8.453%) 2026-03-09T14:40:07.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:07 vm07 bash[55244]: cluster 2026-03-09T14:40:06.540107+0000 mgr.y (mgr.44103) 121 : cluster [DBG] pgmap v53: 161 pgs: 30 active+undersized, 12 peering, 16 active+undersized+degraded, 103 active+clean; 457 KiB data, 162 MiB 
used, 160 GiB / 160 GiB avail; 53/627 objects degraded (8.453%) 2026-03-09T14:40:07.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:07 vm07 bash[56315]: audit 2026-03-09T14:40:06.479288+0000 mon.a (mon.0) 303 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T14:40:07.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:07 vm07 bash[56315]: audit 2026-03-09T14:40:06.479288+0000 mon.a (mon.0) 303 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T14:40:07.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:07 vm07 bash[56315]: audit 2026-03-09T14:40:06.481787+0000 mon.b (mon.2) 4 : audit [INF] from='osd.0 [v2:192.168.123.107:6802/1245966974,v1:192.168.123.107:6803/1245966974]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:40:07.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:07 vm07 bash[56315]: audit 2026-03-09T14:40:06.481787+0000 mon.b (mon.2) 4 : audit [INF] from='osd.0 [v2:192.168.123.107:6802/1245966974,v1:192.168.123.107:6803/1245966974]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:40:07.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:07 vm07 bash[56315]: cluster 2026-03-09T14:40:06.484925+0000 mon.a (mon.0) 304 : cluster [DBG] osdmap e105: 8 total, 7 up, 8 in 2026-03-09T14:40:07.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:07 vm07 bash[56315]: cluster 2026-03-09T14:40:06.484925+0000 mon.a (mon.0) 304 : cluster [DBG] osdmap e105: 8 total, 7 up, 8 in 2026-03-09T14:40:07.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:07 vm07 bash[56315]: audit 2026-03-09T14:40:06.486419+0000 mon.a (mon.0) 305 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:40:07.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:07 vm07 bash[56315]: audit 2026-03-09T14:40:06.486419+0000 mon.a (mon.0) 305 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:40:07.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:07 vm07 bash[56315]: cluster 2026-03-09T14:40:06.540107+0000 mgr.y (mgr.44103) 121 : cluster [DBG] pgmap v53: 161 pgs: 30 active+undersized, 12 peering, 16 active+undersized+degraded, 103 active+clean; 457 KiB data, 162 MiB used, 160 GiB / 160 GiB avail; 53/627 objects degraded (8.453%) 2026-03-09T14:40:07.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:07 vm07 bash[56315]: cluster 2026-03-09T14:40:06.540107+0000 mgr.y (mgr.44103) 121 : cluster [DBG] pgmap v53: 161 pgs: 30 active+undersized, 12 peering, 16 active+undersized+degraded, 103 active+clean; 457 KiB data, 162 MiB used, 160 GiB / 160 GiB avail; 53/627 objects degraded (8.453%) 2026-03-09T14:40:08.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:08 vm07 bash[55244]: cluster 2026-03-09T14:40:07.479538+0000 mon.a (mon.0) 306 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:40:08.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:08 vm07 bash[55244]: cluster 2026-03-09T14:40:07.479538+0000 mon.a (mon.0) 306 : 
cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:40:08.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:08 vm07 bash[55244]: cluster 2026-03-09T14:40:07.484760+0000 mon.a (mon.0) 307 : cluster [INF] osd.0 [v2:192.168.123.107:6802/1245966974,v1:192.168.123.107:6803/1245966974] boot 2026-03-09T14:40:08.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:08 vm07 bash[55244]: cluster 2026-03-09T14:40:07.484760+0000 mon.a (mon.0) 307 : cluster [INF] osd.0 [v2:192.168.123.107:6802/1245966974,v1:192.168.123.107:6803/1245966974] boot 2026-03-09T14:40:08.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:08 vm07 bash[55244]: cluster 2026-03-09T14:40:07.484797+0000 mon.a (mon.0) 308 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in 2026-03-09T14:40:08.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:08 vm07 bash[55244]: cluster 2026-03-09T14:40:07.484797+0000 mon.a (mon.0) 308 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in 2026-03-09T14:40:08.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:08 vm07 bash[55244]: audit 2026-03-09T14:40:07.492405+0000 mon.a (mon.0) 309 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:40:08.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:08 vm07 bash[55244]: audit 2026-03-09T14:40:07.492405+0000 mon.a (mon.0) 309 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:40:08.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:08 vm07 bash[55244]: audit 2026-03-09T14:40:07.506149+0000 mgr.y (mgr.44103) 122 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:40:08.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:08 vm07 bash[55244]: audit 2026-03-09T14:40:07.506149+0000 mgr.y (mgr.44103) 122 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:40:08.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:08 vm07 bash[55244]: audit 2026-03-09T14:40:07.574175+0000 mon.a (mon.0) 310 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:40:08.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:08 vm07 bash[55244]: audit 2026-03-09T14:40:07.574175+0000 mon.a (mon.0) 310 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:40:08.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:08 vm07 bash[56315]: cluster 2026-03-09T14:40:07.479538+0000 mon.a (mon.0) 306 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:40:08.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:08 vm07 bash[56315]: cluster 2026-03-09T14:40:07.479538+0000 mon.a (mon.0) 306 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:40:08.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:08 vm07 bash[56315]: cluster 2026-03-09T14:40:07.484760+0000 mon.a (mon.0) 307 : cluster [INF] osd.0 [v2:192.168.123.107:6802/1245966974,v1:192.168.123.107:6803/1245966974] boot 2026-03-09T14:40:08.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:08 vm07 bash[56315]: cluster 2026-03-09T14:40:07.484760+0000 mon.a 
(mon.0) 307 : cluster [INF] osd.0 [v2:192.168.123.107:6802/1245966974,v1:192.168.123.107:6803/1245966974] boot 2026-03-09T14:40:08.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:08 vm07 bash[56315]: cluster 2026-03-09T14:40:07.484797+0000 mon.a (mon.0) 308 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in 2026-03-09T14:40:08.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:08 vm07 bash[56315]: cluster 2026-03-09T14:40:07.484797+0000 mon.a (mon.0) 308 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in 2026-03-09T14:40:08.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:08 vm07 bash[56315]: audit 2026-03-09T14:40:07.492405+0000 mon.a (mon.0) 309 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:40:08.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:08 vm07 bash[56315]: audit 2026-03-09T14:40:07.492405+0000 mon.a (mon.0) 309 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:40:08.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:08 vm07 bash[56315]: audit 2026-03-09T14:40:07.506149+0000 mgr.y (mgr.44103) 122 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:40:08.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:08 vm07 bash[56315]: audit 2026-03-09T14:40:07.506149+0000 mgr.y (mgr.44103) 122 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:40:08.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:08 vm07 bash[56315]: audit 2026-03-09T14:40:07.574175+0000 mon.a (mon.0) 310 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:40:08.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:08 vm07 bash[56315]: audit 2026-03-09T14:40:07.574175+0000 mon.a (mon.0) 310 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:40:09.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:08 vm11 bash[43577]: cluster 2026-03-09T14:40:07.479538+0000 mon.a (mon.0) 306 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:40:09.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:08 vm11 bash[43577]: cluster 2026-03-09T14:40:07.479538+0000 mon.a (mon.0) 306 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:40:09.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:08 vm11 bash[43577]: cluster 2026-03-09T14:40:07.484760+0000 mon.a (mon.0) 307 : cluster [INF] osd.0 [v2:192.168.123.107:6802/1245966974,v1:192.168.123.107:6803/1245966974] boot 2026-03-09T14:40:09.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:08 vm11 bash[43577]: cluster 2026-03-09T14:40:07.484760+0000 mon.a (mon.0) 307 : cluster [INF] osd.0 [v2:192.168.123.107:6802/1245966974,v1:192.168.123.107:6803/1245966974] boot 2026-03-09T14:40:09.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:08 vm11 bash[43577]: cluster 2026-03-09T14:40:07.484797+0000 mon.a (mon.0) 308 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in 2026-03-09T14:40:09.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:08 vm11 bash[43577]: cluster 2026-03-09T14:40:07.484797+0000 mon.a (mon.0) 
308 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in 2026-03-09T14:40:09.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:08 vm11 bash[43577]: audit 2026-03-09T14:40:07.492405+0000 mon.a (mon.0) 309 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:40:09.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:08 vm11 bash[43577]: audit 2026-03-09T14:40:07.492405+0000 mon.a (mon.0) 309 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:40:09.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:08 vm11 bash[43577]: audit 2026-03-09T14:40:07.506149+0000 mgr.y (mgr.44103) 122 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:40:09.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:08 vm11 bash[43577]: audit 2026-03-09T14:40:07.506149+0000 mgr.y (mgr.44103) 122 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:40:09.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:08 vm11 bash[43577]: audit 2026-03-09T14:40:07.574175+0000 mon.a (mon.0) 310 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:40:09.004 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:08 vm11 bash[43577]: audit 2026-03-09T14:40:07.574175+0000 mon.a (mon.0) 310 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:40:09.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:09 vm07 bash[55244]: cluster 2026-03-09T14:40:08.516914+0000 mon.a (mon.0) 311 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-09T14:40:09.945 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:09 vm07 bash[55244]: cluster 2026-03-09T14:40:08.516914+0000 mon.a (mon.0) 311 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-09T14:40:09.945 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:09 vm07 bash[55244]: cluster 2026-03-09T14:40:08.540583+0000 mgr.y (mgr.44103) 123 : cluster [DBG] pgmap v56: 161 pgs: 20 active+undersized, 12 activating, 16 peering, 10 active+undersized+degraded, 103 active+clean; 457 KiB data, 162 MiB used, 160 GiB / 160 GiB avail; 31/627 objects degraded (4.944%) 2026-03-09T14:40:09.945 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:09 vm07 bash[55244]: cluster 2026-03-09T14:40:08.540583+0000 mgr.y (mgr.44103) 123 : cluster [DBG] pgmap v56: 161 pgs: 20 active+undersized, 12 activating, 16 peering, 10 active+undersized+degraded, 103 active+clean; 457 KiB data, 162 MiB used, 160 GiB / 160 GiB avail; 31/627 objects degraded (4.944%) 2026-03-09T14:40:09.945 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:09 vm07 bash[56315]: cluster 2026-03-09T14:40:08.516914+0000 mon.a (mon.0) 311 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-09T14:40:09.945 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:09 vm07 bash[56315]: cluster 2026-03-09T14:40:08.516914+0000 mon.a (mon.0) 311 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-09T14:40:09.945 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:09 vm07 bash[56315]: cluster 2026-03-09T14:40:08.540583+0000 mgr.y (mgr.44103) 123 : cluster [DBG] pgmap v56: 161 pgs: 20 
active+undersized, 12 activating, 16 peering, 10 active+undersized+degraded, 103 active+clean; 457 KiB data, 162 MiB used, 160 GiB / 160 GiB avail; 31/627 objects degraded (4.944%) 2026-03-09T14:40:09.945 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:09 vm07 bash[56315]: cluster 2026-03-09T14:40:08.540583+0000 mgr.y (mgr.44103) 123 : cluster [DBG] pgmap v56: 161 pgs: 20 active+undersized, 12 activating, 16 peering, 10 active+undersized+degraded, 103 active+clean; 457 KiB data, 162 MiB used, 160 GiB / 160 GiB avail; 31/627 objects degraded (4.944%) 2026-03-09T14:40:10.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:09 vm11 bash[43577]: cluster 2026-03-09T14:40:08.516914+0000 mon.a (mon.0) 311 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-09T14:40:10.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:09 vm11 bash[43577]: cluster 2026-03-09T14:40:08.516914+0000 mon.a (mon.0) 311 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-09T14:40:10.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:09 vm11 bash[43577]: cluster 2026-03-09T14:40:08.540583+0000 mgr.y (mgr.44103) 123 : cluster [DBG] pgmap v56: 161 pgs: 20 active+undersized, 12 activating, 16 peering, 10 active+undersized+degraded, 103 active+clean; 457 KiB data, 162 MiB used, 160 GiB / 160 GiB avail; 31/627 objects degraded (4.944%) 2026-03-09T14:40:10.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:09 vm11 bash[43577]: cluster 2026-03-09T14:40:08.540583+0000 mgr.y (mgr.44103) 123 : cluster [DBG] pgmap v56: 161 pgs: 20 active+undersized, 12 activating, 16 peering, 10 active+undersized+degraded, 103 active+clean; 457 KiB data, 162 MiB used, 160 GiB / 160 GiB avail; 31/627 objects degraded (4.944%) 2026-03-09T14:40:11.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:10 vm11 bash[43577]: audit 2026-03-09T14:40:09.729209+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:11.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:10 vm11 bash[43577]: audit 2026-03-09T14:40:09.729209+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:11.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:10 vm11 bash[43577]: audit 2026-03-09T14:40:09.736411+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:11.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:10 vm11 bash[43577]: audit 2026-03-09T14:40:09.736411+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:11.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:10 vm11 bash[43577]: audit 2026-03-09T14:40:10.304581+0000 mon.a (mon.0) 314 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:11.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:10 vm11 bash[43577]: audit 2026-03-09T14:40:10.304581+0000 mon.a (mon.0) 314 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:11.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:10 vm11 bash[43577]: audit 2026-03-09T14:40:10.310084+0000 mon.a (mon.0) 315 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:11.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:10 vm11 bash[43577]: audit 2026-03-09T14:40:10.310084+0000 mon.a (mon.0) 315 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 
2026-03-09T14:40:11.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:10 vm07 bash[55244]: audit 2026-03-09T14:40:09.729209+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:11.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:10 vm07 bash[55244]: audit 2026-03-09T14:40:09.729209+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:11.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:10 vm07 bash[55244]: audit 2026-03-09T14:40:09.736411+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:11.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:10 vm07 bash[55244]: audit 2026-03-09T14:40:09.736411+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:11.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:10 vm07 bash[55244]: audit 2026-03-09T14:40:10.304581+0000 mon.a (mon.0) 314 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:11.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:10 vm07 bash[55244]: audit 2026-03-09T14:40:10.304581+0000 mon.a (mon.0) 314 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:11.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:10 vm07 bash[55244]: audit 2026-03-09T14:40:10.310084+0000 mon.a (mon.0) 315 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:11.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:10 vm07 bash[55244]: audit 2026-03-09T14:40:10.310084+0000 mon.a (mon.0) 315 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:11.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:10 vm07 bash[56315]: audit 2026-03-09T14:40:09.729209+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:11.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:10 vm07 bash[56315]: audit 2026-03-09T14:40:09.729209+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:11.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:10 vm07 bash[56315]: audit 2026-03-09T14:40:09.736411+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:11.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:10 vm07 bash[56315]: audit 2026-03-09T14:40:09.736411+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:11.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:10 vm07 bash[56315]: audit 2026-03-09T14:40:10.304581+0000 mon.a (mon.0) 314 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:11.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:10 vm07 bash[56315]: audit 2026-03-09T14:40:10.304581+0000 mon.a (mon.0) 314 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:11.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:10 vm07 bash[56315]: audit 2026-03-09T14:40:10.310084+0000 mon.a (mon.0) 315 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:11.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:10 vm07 bash[56315]: audit 2026-03-09T14:40:10.310084+0000 mon.a (mon.0) 
315 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:12.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:11 vm11 bash[43577]: cluster 2026-03-09T14:40:10.540971+0000 mgr.y (mgr.44103) 124 : cluster [DBG] pgmap v57: 161 pgs: 16 active+undersized, 12 activating, 21 peering, 9 active+undersized+degraded, 103 active+clean; 457 KiB data, 162 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 30/627 objects degraded (4.785%) 2026-03-09T14:40:12.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:11 vm11 bash[43577]: cluster 2026-03-09T14:40:10.540971+0000 mgr.y (mgr.44103) 124 : cluster [DBG] pgmap v57: 161 pgs: 16 active+undersized, 12 activating, 21 peering, 9 active+undersized+degraded, 103 active+clean; 457 KiB data, 162 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 30/627 objects degraded (4.785%) 2026-03-09T14:40:12.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:11 vm11 bash[43577]: cluster 2026-03-09T14:40:11.308787+0000 mon.a (mon.0) 316 : cluster [WRN] Health check update: Degraded data redundancy: 30/627 objects degraded (4.785%), 9 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:12.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:11 vm11 bash[43577]: cluster 2026-03-09T14:40:11.308787+0000 mon.a (mon.0) 316 : cluster [WRN] Health check update: Degraded data redundancy: 30/627 objects degraded (4.785%), 9 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:12.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:11 vm07 bash[55244]: cluster 2026-03-09T14:40:10.540971+0000 mgr.y (mgr.44103) 124 : cluster [DBG] pgmap v57: 161 pgs: 16 active+undersized, 12 activating, 21 peering, 9 active+undersized+degraded, 103 active+clean; 457 KiB data, 162 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 30/627 objects degraded (4.785%) 2026-03-09T14:40:12.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:11 vm07 bash[55244]: cluster 2026-03-09T14:40:10.540971+0000 mgr.y (mgr.44103) 124 : cluster [DBG] pgmap v57: 161 pgs: 16 active+undersized, 12 activating, 21 peering, 9 active+undersized+degraded, 103 active+clean; 457 KiB data, 162 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 30/627 objects degraded (4.785%) 2026-03-09T14:40:12.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:11 vm07 bash[55244]: cluster 2026-03-09T14:40:11.308787+0000 mon.a (mon.0) 316 : cluster [WRN] Health check update: Degraded data redundancy: 30/627 objects degraded (4.785%), 9 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:12.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:11 vm07 bash[55244]: cluster 2026-03-09T14:40:11.308787+0000 mon.a (mon.0) 316 : cluster [WRN] Health check update: Degraded data redundancy: 30/627 objects degraded (4.785%), 9 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:12.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:11 vm07 bash[56315]: cluster 2026-03-09T14:40:10.540971+0000 mgr.y (mgr.44103) 124 : cluster [DBG] pgmap v57: 161 pgs: 16 active+undersized, 12 activating, 21 peering, 9 active+undersized+degraded, 103 active+clean; 457 KiB data, 162 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 30/627 objects degraded (4.785%) 2026-03-09T14:40:12.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:11 vm07 bash[56315]: cluster 2026-03-09T14:40:10.540971+0000 mgr.y (mgr.44103) 124 : cluster [DBG] pgmap v57: 161 pgs: 16 active+undersized, 12 activating, 21 peering, 9 active+undersized+degraded, 103 active+clean; 457 KiB data, 162 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 
op/s; 30/627 objects degraded (4.785%) 2026-03-09T14:40:12.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:11 vm07 bash[56315]: cluster 2026-03-09T14:40:11.308787+0000 mon.a (mon.0) 316 : cluster [WRN] Health check update: Degraded data redundancy: 30/627 objects degraded (4.785%), 9 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:12.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:11 vm07 bash[56315]: cluster 2026-03-09T14:40:11.308787+0000 mon.a (mon.0) 316 : cluster [WRN] Health check update: Degraded data redundancy: 30/627 objects degraded (4.785%), 9 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:13.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:12 vm07 bash[55244]: cluster 2026-03-09T14:40:12.732246+0000 mon.a (mon.0) 317 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 30/627 objects degraded (4.785%), 9 pgs degraded) 2026-03-09T14:40:13.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:12 vm07 bash[55244]: cluster 2026-03-09T14:40:12.732246+0000 mon.a (mon.0) 317 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 30/627 objects degraded (4.785%), 9 pgs degraded) 2026-03-09T14:40:13.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:12 vm07 bash[55244]: cluster 2026-03-09T14:40:12.732289+0000 mon.a (mon.0) 318 : cluster [INF] Cluster is now healthy 2026-03-09T14:40:13.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:12 vm07 bash[55244]: cluster 2026-03-09T14:40:12.732289+0000 mon.a (mon.0) 318 : cluster [INF] Cluster is now healthy 2026-03-09T14:40:13.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:12 vm07 bash[56315]: cluster 2026-03-09T14:40:12.732246+0000 mon.a (mon.0) 317 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 30/627 objects degraded (4.785%), 9 pgs degraded) 2026-03-09T14:40:13.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:12 vm07 bash[56315]: cluster 2026-03-09T14:40:12.732246+0000 mon.a (mon.0) 317 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 30/627 objects degraded (4.785%), 9 pgs degraded) 2026-03-09T14:40:13.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:12 vm07 bash[56315]: cluster 2026-03-09T14:40:12.732289+0000 mon.a (mon.0) 318 : cluster [INF] Cluster is now healthy 2026-03-09T14:40:13.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:12 vm07 bash[56315]: cluster 2026-03-09T14:40:12.732289+0000 mon.a (mon.0) 318 : cluster [INF] Cluster is now healthy 2026-03-09T14:40:13.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:12 vm11 bash[43577]: cluster 2026-03-09T14:40:12.732246+0000 mon.a (mon.0) 317 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 30/627 objects degraded (4.785%), 9 pgs degraded) 2026-03-09T14:40:13.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:12 vm11 bash[43577]: cluster 2026-03-09T14:40:12.732246+0000 mon.a (mon.0) 317 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 30/627 objects degraded (4.785%), 9 pgs degraded) 2026-03-09T14:40:13.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:12 vm11 bash[43577]: cluster 2026-03-09T14:40:12.732289+0000 mon.a (mon.0) 318 : cluster [INF] Cluster is now healthy 2026-03-09T14:40:13.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:12 vm11 bash[43577]: cluster 2026-03-09T14:40:12.732289+0000 mon.a (mon.0) 318 : cluster [INF] Cluster is now healthy 2026-03-09T14:40:13.779 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 
09 14:40:13 vm07 bash[52213]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:40:13] "GET /metrics HTTP/1.1" 200 37768 "" "Prometheus/2.51.0" 2026-03-09T14:40:14.142 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:13 vm11 bash[43577]: cluster 2026-03-09T14:40:12.541311+0000 mgr.y (mgr.44103) 125 : cluster [DBG] pgmap v58: 161 pgs: 12 activating, 19 peering, 130 active+clean; 457 KiB data, 162 MiB used, 160 GiB / 160 GiB avail; 844 B/s rd, 0 op/s 2026-03-09T14:40:14.143 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:13 vm11 bash[43577]: cluster 2026-03-09T14:40:12.541311+0000 mgr.y (mgr.44103) 125 : cluster [DBG] pgmap v58: 161 pgs: 12 activating, 19 peering, 130 active+clean; 457 KiB data, 162 MiB used, 160 GiB / 160 GiB avail; 844 B/s rd, 0 op/s 2026-03-09T14:40:14.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:13 vm07 bash[56315]: cluster 2026-03-09T14:40:12.541311+0000 mgr.y (mgr.44103) 125 : cluster [DBG] pgmap v58: 161 pgs: 12 activating, 19 peering, 130 active+clean; 457 KiB data, 162 MiB used, 160 GiB / 160 GiB avail; 844 B/s rd, 0 op/s 2026-03-09T14:40:14.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:13 vm07 bash[56315]: cluster 2026-03-09T14:40:12.541311+0000 mgr.y (mgr.44103) 125 : cluster [DBG] pgmap v58: 161 pgs: 12 activating, 19 peering, 130 active+clean; 457 KiB data, 162 MiB used, 160 GiB / 160 GiB avail; 844 B/s rd, 0 op/s 2026-03-09T14:40:14.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:13 vm07 bash[55244]: cluster 2026-03-09T14:40:12.541311+0000 mgr.y (mgr.44103) 125 : cluster [DBG] pgmap v58: 161 pgs: 12 activating, 19 peering, 130 active+clean; 457 KiB data, 162 MiB used, 160 GiB / 160 GiB avail; 844 B/s rd, 0 op/s 2026-03-09T14:40:14.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:13 vm07 bash[55244]: cluster 2026-03-09T14:40:12.541311+0000 mgr.y (mgr.44103) 125 : cluster [DBG] pgmap v58: 161 pgs: 12 activating, 19 peering, 130 active+clean; 457 KiB data, 162 MiB used, 160 GiB / 160 GiB avail; 844 B/s rd, 0 op/s 2026-03-09T14:40:14.503 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:40:14 vm11 bash[41290]: ts=2026-03-09T14:40:14.146Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.1\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.1\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.1\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"192.168.123.111:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:40:16.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:15 vm07 bash[55244]: cluster 2026-03-09T14:40:14.541768+0000 mgr.y (mgr.44103) 126 : cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, 163 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:40:16.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:15 vm07 bash[55244]: cluster 2026-03-09T14:40:14.541768+0000 mgr.y (mgr.44103) 126 : cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, 163 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:40:16.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:15 vm07 bash[56315]: cluster 2026-03-09T14:40:14.541768+0000 mgr.y (mgr.44103) 126 : cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, 163 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:40:16.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:15 vm07 bash[56315]: cluster 2026-03-09T14:40:14.541768+0000 mgr.y (mgr.44103) 126 : cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, 163 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:40:16.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:15 vm11 bash[43577]: cluster 2026-03-09T14:40:14.541768+0000 mgr.y (mgr.44103) 126 : cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, 163 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:40:16.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:15 vm11 bash[43577]: cluster 2026-03-09T14:40:14.541768+0000 mgr.y (mgr.44103) 126 : cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, 163 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:40:17.253 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:40:16 vm11 bash[41290]: ts=2026-03-09T14:40:16.947Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in 
less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:40:18.082 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:17 vm07 bash[56315]: cluster 2026-03-09T14:40:16.542071+0000 mgr.y (mgr.44103) 127 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 163 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:40:18.082 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:17 vm07 bash[56315]: cluster 2026-03-09T14:40:16.542071+0000 mgr.y (mgr.44103) 127 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 163 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:40:18.082 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:17 vm07 bash[56315]: audit 2026-03-09T14:40:16.889185+0000 mon.a (mon.0) 319 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:18.082 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:17 vm07 bash[56315]: audit 2026-03-09T14:40:16.889185+0000 mon.a (mon.0) 319 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:18.082 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:17 vm07 bash[56315]: audit 2026-03-09T14:40:16.896183+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:18.082 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:17 vm07 bash[56315]: audit 2026-03-09T14:40:16.896183+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:18.082 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:17 vm07 bash[56315]: audit 2026-03-09T14:40:16.899324+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:18.082 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:17 vm07 bash[56315]: audit 2026-03-09T14:40:16.899324+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:18.082 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:17 vm07 bash[56315]: audit 2026-03-09T14:40:16.899921+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:40:18.082 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:17 vm07 bash[56315]: audit 2026-03-09T14:40:16.899921+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:40:18.082 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:17 vm07 bash[56315]: 
audit 2026-03-09T14:40:16.904558+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:18.082 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:17 vm07 bash[56315]: audit 2026-03-09T14:40:16.904558+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:18.082 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:17 vm07 bash[56315]: audit 2026-03-09T14:40:16.945747+0000 mon.a (mon.0) 324 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:40:18.082 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:17 vm07 bash[56315]: audit 2026-03-09T14:40:16.945747+0000 mon.a (mon.0) 324 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:40:18.082 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:17 vm07 bash[56315]: audit 2026-03-09T14:40:16.946850+0000 mon.a (mon.0) 325 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:18.082 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:17 vm07 bash[56315]: audit 2026-03-09T14:40:16.946850+0000 mon.a (mon.0) 325 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:18.082 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:17 vm07 bash[56315]: audit 2026-03-09T14:40:16.947567+0000 mon.a (mon.0) 326 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:18.082 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:17 vm07 bash[56315]: audit 2026-03-09T14:40:16.947567+0000 mon.a (mon.0) 326 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:18.082 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:17 vm07 bash[56315]: audit 2026-03-09T14:40:16.948050+0000 mon.a (mon.0) 327 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:18.082 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:17 vm07 bash[56315]: audit 2026-03-09T14:40:16.948050+0000 mon.a (mon.0) 327 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:18.082 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:17 vm07 bash[56315]: audit 2026-03-09T14:40:16.948612+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-09T14:40:18.082 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:17 vm07 bash[56315]: audit 2026-03-09T14:40:16.948612+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-09T14:40:18.082 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:17 vm07 bash[56315]: audit 2026-03-09T14:40:16.948744+0000 mgr.y (mgr.44103) 128 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-09T14:40:18.082 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:17 vm07 bash[56315]: audit 2026-03-09T14:40:16.948744+0000 mgr.y (mgr.44103) 128 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:17 vm07 bash[56315]: cephadm 2026-03-09T14:40:16.949226+0000 mgr.y (mgr.44103) 129 : cephadm [INF] Upgrade: osd.1 is safe to restart 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:17 vm07 bash[56315]: cephadm 2026-03-09T14:40:16.949226+0000 mgr.y (mgr.44103) 129 : cephadm [INF] Upgrade: osd.1 is safe to restart 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:17 vm07 bash[56315]: cephadm 2026-03-09T14:40:17.357225+0000 mgr.y (mgr.44103) 130 : cephadm [INF] Upgrade: Updating osd.1 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:17 vm07 bash[56315]: cephadm 2026-03-09T14:40:17.357225+0000 mgr.y (mgr.44103) 130 : cephadm [INF] Upgrade: Updating osd.1 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:17 vm07 bash[56315]: audit 2026-03-09T14:40:17.362031+0000 mon.a (mon.0) 329 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:17 vm07 bash[56315]: audit 2026-03-09T14:40:17.362031+0000 mon.a (mon.0) 329 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:17 vm07 bash[56315]: audit 2026-03-09T14:40:17.364845+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:17 vm07 bash[56315]: audit 2026-03-09T14:40:17.364845+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:17 vm07 bash[56315]: audit 2026-03-09T14:40:17.365932+0000 mon.a (mon.0) 331 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:17 vm07 bash[56315]: audit 2026-03-09T14:40:17.365932+0000 mon.a (mon.0) 331 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:17 vm07 bash[56315]: cephadm 2026-03-09T14:40:17.367247+0000 mgr.y (mgr.44103) 131 : cephadm [INF] Deploying daemon osd.1 on vm07 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:17 vm07 bash[56315]: cephadm 2026-03-09T14:40:17.367247+0000 mgr.y (mgr.44103) 131 : cephadm [INF] Deploying daemon osd.1 on vm07 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:17 vm07 bash[56315]: audit 2026-03-09T14:40:17.606226+0000 mon.a (mon.0) 332 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:17 vm07 bash[56315]: audit 2026-03-09T14:40:17.606226+0000 mon.a (mon.0) 332 : audit 
[INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:17 vm07 bash[55244]: cluster 2026-03-09T14:40:16.542071+0000 mgr.y (mgr.44103) 127 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 163 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:17 vm07 bash[55244]: cluster 2026-03-09T14:40:16.542071+0000 mgr.y (mgr.44103) 127 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 163 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:17 vm07 bash[55244]: audit 2026-03-09T14:40:16.889185+0000 mon.a (mon.0) 319 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:17 vm07 bash[55244]: audit 2026-03-09T14:40:16.889185+0000 mon.a (mon.0) 319 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:17 vm07 bash[55244]: audit 2026-03-09T14:40:16.896183+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:17 vm07 bash[55244]: audit 2026-03-09T14:40:16.896183+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:17 vm07 bash[55244]: audit 2026-03-09T14:40:16.899324+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:17 vm07 bash[55244]: audit 2026-03-09T14:40:16.899324+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:17 vm07 bash[55244]: audit 2026-03-09T14:40:16.899921+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:17 vm07 bash[55244]: audit 2026-03-09T14:40:16.899921+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:17 vm07 bash[55244]: audit 2026-03-09T14:40:16.904558+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:17 vm07 bash[55244]: audit 2026-03-09T14:40:16.904558+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:17 vm07 bash[55244]: audit 2026-03-09T14:40:16.945747+0000 mon.a (mon.0) 324 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:17 vm07 bash[55244]: audit 
2026-03-09T14:40:16.945747+0000 mon.a (mon.0) 324 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:17 vm07 bash[55244]: audit 2026-03-09T14:40:16.946850+0000 mon.a (mon.0) 325 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:17 vm07 bash[55244]: audit 2026-03-09T14:40:16.946850+0000 mon.a (mon.0) 325 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:17 vm07 bash[55244]: audit 2026-03-09T14:40:16.947567+0000 mon.a (mon.0) 326 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:17 vm07 bash[55244]: audit 2026-03-09T14:40:16.947567+0000 mon.a (mon.0) 326 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:17 vm07 bash[55244]: audit 2026-03-09T14:40:16.948050+0000 mon.a (mon.0) 327 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:17 vm07 bash[55244]: audit 2026-03-09T14:40:16.948050+0000 mon.a (mon.0) 327 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:17 vm07 bash[55244]: audit 2026-03-09T14:40:16.948612+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:17 vm07 bash[55244]: audit 2026-03-09T14:40:16.948612+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:17 vm07 bash[55244]: audit 2026-03-09T14:40:16.948744+0000 mgr.y (mgr.44103) 128 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:17 vm07 bash[55244]: audit 2026-03-09T14:40:16.948744+0000 mgr.y (mgr.44103) 128 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:17 vm07 bash[55244]: cephadm 2026-03-09T14:40:16.949226+0000 mgr.y (mgr.44103) 129 : cephadm [INF] Upgrade: osd.1 is safe to restart 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:17 vm07 bash[55244]: cephadm 2026-03-09T14:40:16.949226+0000 mgr.y (mgr.44103) 129 : cephadm [INF] Upgrade: osd.1 is safe to restart 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:17 vm07 bash[55244]: cephadm 2026-03-09T14:40:17.357225+0000 mgr.y (mgr.44103) 130 : cephadm [INF] Upgrade: Updating osd.1 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:17 vm07 bash[55244]: cephadm 2026-03-09T14:40:17.357225+0000 mgr.y (mgr.44103) 130 : cephadm [INF] Upgrade: Updating osd.1 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:17 vm07 bash[55244]: audit 2026-03-09T14:40:17.362031+0000 mon.a (mon.0) 329 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:17 vm07 bash[55244]: audit 2026-03-09T14:40:17.362031+0000 mon.a (mon.0) 329 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:17 vm07 bash[55244]: audit 2026-03-09T14:40:17.364845+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:17 vm07 bash[55244]: audit 2026-03-09T14:40:17.364845+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:17 vm07 bash[55244]: audit 2026-03-09T14:40:17.365932+0000 mon.a (mon.0) 331 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:17 vm07 bash[55244]: audit 2026-03-09T14:40:17.365932+0000 mon.a (mon.0) 331 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:17 vm07 bash[55244]: cephadm 2026-03-09T14:40:17.367247+0000 mgr.y (mgr.44103) 131 : cephadm [INF] Deploying daemon osd.1 on vm07 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:17 vm07 bash[55244]: cephadm 2026-03-09T14:40:17.367247+0000 mgr.y (mgr.44103) 131 : cephadm [INF] Deploying daemon osd.1 on vm07 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:17 vm07 bash[55244]: audit 2026-03-09T14:40:17.606226+0000 mon.a (mon.0) 332 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:18.083 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:17 vm07 bash[55244]: audit 2026-03-09T14:40:17.606226+0000 mon.a (mon.0) 332 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:18.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:17 vm11 bash[43577]: cluster 2026-03-09T14:40:16.542071+0000 mgr.y (mgr.44103) 127 : cluster [DBG] pgmap v60: 161 pgs: 161 
active+clean; 457 KiB data, 163 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:40:18.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:17 vm11 bash[43577]: cluster 2026-03-09T14:40:16.542071+0000 mgr.y (mgr.44103) 127 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 163 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:40:18.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:17 vm11 bash[43577]: audit 2026-03-09T14:40:16.889185+0000 mon.a (mon.0) 319 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:18.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:17 vm11 bash[43577]: audit 2026-03-09T14:40:16.889185+0000 mon.a (mon.0) 319 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:18.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:17 vm11 bash[43577]: audit 2026-03-09T14:40:16.896183+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:18.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:17 vm11 bash[43577]: audit 2026-03-09T14:40:16.896183+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:18.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:17 vm11 bash[43577]: audit 2026-03-09T14:40:16.899324+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:18.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:17 vm11 bash[43577]: audit 2026-03-09T14:40:16.899324+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:18.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:17 vm11 bash[43577]: audit 2026-03-09T14:40:16.899921+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:40:18.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:17 vm11 bash[43577]: audit 2026-03-09T14:40:16.899921+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:40:18.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:17 vm11 bash[43577]: audit 2026-03-09T14:40:16.904558+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:18.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:17 vm11 bash[43577]: audit 2026-03-09T14:40:16.904558+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:18.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:17 vm11 bash[43577]: audit 2026-03-09T14:40:16.945747+0000 mon.a (mon.0) 324 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:40:18.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:17 vm11 bash[43577]: audit 2026-03-09T14:40:16.945747+0000 mon.a (mon.0) 324 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:40:18.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:17 vm11 
bash[43577]: audit 2026-03-09T14:40:16.946850+0000 mon.a (mon.0) 325 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:18.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:17 vm11 bash[43577]: audit 2026-03-09T14:40:16.946850+0000 mon.a (mon.0) 325 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:18.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:17 vm11 bash[43577]: audit 2026-03-09T14:40:16.947567+0000 mon.a (mon.0) 326 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:18.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:17 vm11 bash[43577]: audit 2026-03-09T14:40:16.947567+0000 mon.a (mon.0) 326 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:18.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:17 vm11 bash[43577]: audit 2026-03-09T14:40:16.948050+0000 mon.a (mon.0) 327 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:18.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:17 vm11 bash[43577]: audit 2026-03-09T14:40:16.948050+0000 mon.a (mon.0) 327 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:18.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:17 vm11 bash[43577]: audit 2026-03-09T14:40:16.948612+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-09T14:40:18.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:17 vm11 bash[43577]: audit 2026-03-09T14:40:16.948612+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-09T14:40:18.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:17 vm11 bash[43577]: audit 2026-03-09T14:40:16.948744+0000 mgr.y (mgr.44103) 128 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-09T14:40:18.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:17 vm11 bash[43577]: audit 2026-03-09T14:40:16.948744+0000 mgr.y (mgr.44103) 128 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch 2026-03-09T14:40:18.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:17 vm11 bash[43577]: cephadm 2026-03-09T14:40:16.949226+0000 mgr.y (mgr.44103) 129 : cephadm [INF] Upgrade: osd.1 is safe to restart 2026-03-09T14:40:18.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:17 vm11 bash[43577]: cephadm 2026-03-09T14:40:16.949226+0000 mgr.y (mgr.44103) 129 : cephadm [INF] Upgrade: osd.1 is safe to restart 2026-03-09T14:40:18.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:17 vm11 bash[43577]: cephadm 2026-03-09T14:40:17.357225+0000 mgr.y (mgr.44103) 130 : cephadm [INF] Upgrade: Updating osd.1 2026-03-09T14:40:18.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:17 vm11 bash[43577]: cephadm 2026-03-09T14:40:17.357225+0000 mgr.y (mgr.44103) 130 : cephadm [INF] Upgrade: Updating osd.1 2026-03-09T14:40:18.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:17 vm11 bash[43577]: audit 2026-03-09T14:40:17.362031+0000 mon.a (mon.0) 329 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:18.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:17 vm11 bash[43577]: audit 2026-03-09T14:40:17.362031+0000 mon.a (mon.0) 329 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:18.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:17 vm11 bash[43577]: audit 2026-03-09T14:40:17.364845+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T14:40:18.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:17 vm11 bash[43577]: audit 2026-03-09T14:40:17.364845+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T14:40:18.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:17 vm11 bash[43577]: audit 2026-03-09T14:40:17.365932+0000 mon.a (mon.0) 331 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:18.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:17 vm11 bash[43577]: audit 2026-03-09T14:40:17.365932+0000 mon.a (mon.0) 331 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:18.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:17 vm11 bash[43577]: cephadm 2026-03-09T14:40:17.367247+0000 mgr.y (mgr.44103) 131 : cephadm [INF] Deploying daemon osd.1 on vm07 2026-03-09T14:40:18.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:17 vm11 bash[43577]: cephadm 2026-03-09T14:40:17.367247+0000 mgr.y (mgr.44103) 131 : cephadm [INF] Deploying daemon osd.1 on vm07 2026-03-09T14:40:18.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:17 vm11 bash[43577]: audit 2026-03-09T14:40:17.606226+0000 mon.a (mon.0) 332 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:18.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:17 vm11 bash[43577]: audit 2026-03-09T14:40:17.606226+0000 mon.a (mon.0) 332 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:18.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:18 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use 
KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:18.405 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:40:18 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:18.405 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:40:18 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:18.405 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:40:18 vm07 systemd[1]: Stopping Ceph osd.1 for f59f9828-1bc3-11f1-bfd8-7b3d0c866040... 2026-03-09T14:40:18.405 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:40:18 vm07 bash[28423]: debug 2026-03-09T14:40:18.218+0000 7f5469b09700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T14:40:18.405 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:40:18 vm07 bash[28423]: debug 2026-03-09T14:40:18.218+0000 7f5469b09700 -1 osd.1 107 *** Got signal Terminated *** 2026-03-09T14:40:18.405 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:40:18 vm07 bash[28423]: debug 2026-03-09T14:40:18.218+0000 7f5469b09700 -1 osd.1 107 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T14:40:18.405 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:40:18 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:18.405 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:40:18 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:18.405 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:40:18 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
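The KillMode=none warnings repeated above refer to the unit file cephadm generated for this cluster. Purely as an illustration of the systemd mechanism the message points at, a drop-in override for that templated unit might look like the sketch below; the drop-in file name is an invented example, 'mixed' is one of the two values the warning itself suggests, and units written by cephadm can be regenerated on redeploy, so this is not a change the test performs.

    # illustrative only: a systemd drop-in for the cephadm-generated template unit named in the warning
    mkdir -p /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service.d
    printf '[Service]\nKillMode=mixed\n' \
        > /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service.d/10-killmode.conf
    systemctl daemon-reload    # pick up the drop-in without restarting the daemons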
2026-03-09T14:40:18.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:18 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:18.405 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:40:18 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:18.405 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:40:18 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:19.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:18 vm07 bash[55244]: audit 2026-03-09T14:40:17.514144+0000 mgr.y (mgr.44103) 132 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:40:19.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:18 vm07 bash[55244]: audit 2026-03-09T14:40:17.514144+0000 mgr.y (mgr.44103) 132 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:40:19.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:18 vm07 bash[55244]: cluster 2026-03-09T14:40:18.223919+0000 mon.a (mon.0) 333 : cluster [INF] osd.1 marked itself down and dead 2026-03-09T14:40:19.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:18 vm07 bash[55244]: cluster 2026-03-09T14:40:18.223919+0000 mon.a (mon.0) 333 : cluster [INF] osd.1 marked itself down and dead 2026-03-09T14:40:19.155 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:40:19 vm07 bash[64960]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-osd-1 2026-03-09T14:40:19.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:18 vm07 bash[56315]: audit 2026-03-09T14:40:17.514144+0000 mgr.y (mgr.44103) 132 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:40:19.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:18 vm07 bash[56315]: audit 2026-03-09T14:40:17.514144+0000 mgr.y (mgr.44103) 132 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:40:19.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:18 vm07 bash[56315]: cluster 2026-03-09T14:40:18.223919+0000 mon.a (mon.0) 333 : cluster [INF] osd.1 marked itself down and dead 2026-03-09T14:40:19.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:18 vm07 bash[56315]: cluster 2026-03-09T14:40:18.223919+0000 mon.a (mon.0) 333 : cluster [INF] osd.1 marked itself down and dead 2026-03-09T14:40:19.253 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:18 vm11 bash[43577]: audit 2026-03-09T14:40:17.514144+0000 mgr.y (mgr.44103) 132 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:40:19.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:18 vm11 bash[43577]: audit 2026-03-09T14:40:17.514144+0000 mgr.y (mgr.44103) 132 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:40:19.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:18 vm11 bash[43577]: cluster 2026-03-09T14:40:18.223919+0000 mon.a (mon.0) 333 : cluster [INF] osd.1 marked itself down and dead 2026-03-09T14:40:19.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:18 vm11 bash[43577]: cluster 2026-03-09T14:40:18.223919+0000 mon.a (mon.0) 333 : cluster [INF] osd.1 marked itself down and dead 2026-03-09T14:40:19.497 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:19 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:19.498 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:40:19 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:19.498 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:19 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:19.498 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:40:19 vm07 systemd[1]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@osd.1.service: Deactivated successfully. 2026-03-09T14:40:19.498 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:40:19 vm07 systemd[1]: Stopped Ceph osd.1 for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:40:19.498 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:40:19 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:19.498 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:40:19 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:19.498 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:40:19 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:19.498 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:40:19 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:19.498 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:40:19 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:19.498 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:40:19 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:19.904 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:40:19 vm07 systemd[1]: Started Ceph osd.1 for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 
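By this point the log has shown the whole per-daemon upgrade step for osd.1: mgr.y asks the monitors whether the OSD is safe to stop ("osd ok-to-stop", ids ["1"], max 16), records that it is safe to restart, redeploys it, and systemd stops and restarts the unit. A minimal sketch of how the same gate and progress can be observed by hand, using only commands that appear in this run (the choice of osd.1 and the one-off invocation are illustrative):

    # same safety check mgr.y dispatches before restarting osd.1
    ceph osd ok-to-stop 1 --max 16
    # upgrade progress as seen from the orchestrator
    ceph orch upgrade status      # in_progress, target image, last message
    ceph orch ps                  # per-daemon container image and version
    ceph versions                 # shrinks to a single entry once every daemon runs the target build
    ceph health detail            # transient OSD_DOWN / PG_DEGRADED warnings like the ones logged below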
2026-03-09T14:40:19.905 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:40:19 vm07 bash[65171]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T14:40:19.905 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:40:19 vm07 bash[65171]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T14:40:20.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:19 vm11 bash[43577]: cluster 2026-03-09T14:40:18.542495+0000 mgr.y (mgr.44103) 133 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 163 MiB used, 160 GiB / 160 GiB avail; 1021 B/s rd, 0 op/s 2026-03-09T14:40:20.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:19 vm11 bash[43577]: cluster 2026-03-09T14:40:18.542495+0000 mgr.y (mgr.44103) 133 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 163 MiB used, 160 GiB / 160 GiB avail; 1021 B/s rd, 0 op/s 2026-03-09T14:40:20.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:19 vm11 bash[43577]: cluster 2026-03-09T14:40:18.893133+0000 mon.a (mon.0) 334 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:40:20.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:19 vm11 bash[43577]: cluster 2026-03-09T14:40:18.893133+0000 mon.a (mon.0) 334 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:40:20.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:19 vm11 bash[43577]: cluster 2026-03-09T14:40:18.906608+0000 mon.a (mon.0) 335 : cluster [DBG] osdmap e108: 8 total, 7 up, 8 in 2026-03-09T14:40:20.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:19 vm11 bash[43577]: cluster 2026-03-09T14:40:18.906608+0000 mon.a (mon.0) 335 : cluster [DBG] osdmap e108: 8 total, 7 up, 8 in 2026-03-09T14:40:20.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:19 vm11 bash[43577]: audit 2026-03-09T14:40:19.534914+0000 mon.a (mon.0) 336 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:20.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:19 vm11 bash[43577]: audit 2026-03-09T14:40:19.534914+0000 mon.a (mon.0) 336 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:20.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:19 vm11 bash[43577]: audit 2026-03-09T14:40:19.541121+0000 mon.a (mon.0) 337 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:20.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:19 vm11 bash[43577]: audit 2026-03-09T14:40:19.541121+0000 mon.a (mon.0) 337 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:20.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:19 vm07 bash[55244]: cluster 2026-03-09T14:40:18.542495+0000 mgr.y (mgr.44103) 133 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 163 MiB used, 160 GiB / 160 GiB avail; 1021 B/s rd, 0 op/s 2026-03-09T14:40:20.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:19 vm07 bash[55244]: cluster 2026-03-09T14:40:18.542495+0000 mgr.y (mgr.44103) 133 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 163 MiB used, 160 GiB / 160 GiB avail; 1021 B/s rd, 0 op/s 2026-03-09T14:40:20.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:19 vm07 bash[55244]: cluster 2026-03-09T14:40:18.893133+0000 mon.a (mon.0) 334 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:40:20.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:19 vm07 bash[55244]: cluster 2026-03-09T14:40:18.893133+0000 mon.a 
(mon.0) 334 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:40:20.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:19 vm07 bash[55244]: cluster 2026-03-09T14:40:18.906608+0000 mon.a (mon.0) 335 : cluster [DBG] osdmap e108: 8 total, 7 up, 8 in 2026-03-09T14:40:20.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:19 vm07 bash[55244]: cluster 2026-03-09T14:40:18.906608+0000 mon.a (mon.0) 335 : cluster [DBG] osdmap e108: 8 total, 7 up, 8 in 2026-03-09T14:40:20.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:19 vm07 bash[55244]: audit 2026-03-09T14:40:19.534914+0000 mon.a (mon.0) 336 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:20.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:19 vm07 bash[55244]: audit 2026-03-09T14:40:19.534914+0000 mon.a (mon.0) 336 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:20.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:19 vm07 bash[55244]: audit 2026-03-09T14:40:19.541121+0000 mon.a (mon.0) 337 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:20.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:19 vm07 bash[55244]: audit 2026-03-09T14:40:19.541121+0000 mon.a (mon.0) 337 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:20.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:19 vm07 bash[56315]: cluster 2026-03-09T14:40:18.542495+0000 mgr.y (mgr.44103) 133 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 163 MiB used, 160 GiB / 160 GiB avail; 1021 B/s rd, 0 op/s 2026-03-09T14:40:20.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:19 vm07 bash[56315]: cluster 2026-03-09T14:40:18.542495+0000 mgr.y (mgr.44103) 133 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 163 MiB used, 160 GiB / 160 GiB avail; 1021 B/s rd, 0 op/s 2026-03-09T14:40:20.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:19 vm07 bash[56315]: cluster 2026-03-09T14:40:18.893133+0000 mon.a (mon.0) 334 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:40:20.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:19 vm07 bash[56315]: cluster 2026-03-09T14:40:18.893133+0000 mon.a (mon.0) 334 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:40:20.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:19 vm07 bash[56315]: cluster 2026-03-09T14:40:18.906608+0000 mon.a (mon.0) 335 : cluster [DBG] osdmap e108: 8 total, 7 up, 8 in 2026-03-09T14:40:20.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:19 vm07 bash[56315]: cluster 2026-03-09T14:40:18.906608+0000 mon.a (mon.0) 335 : cluster [DBG] osdmap e108: 8 total, 7 up, 8 in 2026-03-09T14:40:20.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:19 vm07 bash[56315]: audit 2026-03-09T14:40:19.534914+0000 mon.a (mon.0) 336 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:20.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:19 vm07 bash[56315]: audit 2026-03-09T14:40:19.534914+0000 mon.a (mon.0) 336 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:20.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:19 vm07 bash[56315]: audit 2026-03-09T14:40:19.541121+0000 mon.a (mon.0) 337 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:20.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 
14:40:19 vm07 bash[56315]: audit 2026-03-09T14:40:19.541121+0000 mon.a (mon.0) 337 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:20.904 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:40:20 vm07 bash[65171]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T14:40:20.904 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:40:20 vm07 bash[65171]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T14:40:20.904 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:40:20 vm07 bash[65171]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T14:40:20.904 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:40:20 vm07 bash[65171]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1 2026-03-09T14:40:20.904 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:40:20 vm07 bash[65171]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-72073887-8111-40e9-a332-047e4294a2cd/osd-block-c5bcdd68-0c8f-46dc-8a25-561605efa0ff --path /var/lib/ceph/osd/ceph-1 --no-mon-config 2026-03-09T14:40:21.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:20 vm11 bash[43577]: cluster 2026-03-09T14:40:19.939913+0000 mon.a (mon.0) 338 : cluster [DBG] osdmap e109: 8 total, 7 up, 8 in 2026-03-09T14:40:21.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:20 vm11 bash[43577]: cluster 2026-03-09T14:40:19.939913+0000 mon.a (mon.0) 338 : cluster [DBG] osdmap e109: 8 total, 7 up, 8 in 2026-03-09T14:40:21.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:20 vm07 bash[55244]: cluster 2026-03-09T14:40:19.939913+0000 mon.a (mon.0) 338 : cluster [DBG] osdmap e109: 8 total, 7 up, 8 in 2026-03-09T14:40:21.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:20 vm07 bash[55244]: cluster 2026-03-09T14:40:19.939913+0000 mon.a (mon.0) 338 : cluster [DBG] osdmap e109: 8 total, 7 up, 8 in 2026-03-09T14:40:21.404 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:40:20 vm07 bash[65171]: Running command: /usr/bin/ln -snf /dev/ceph-72073887-8111-40e9-a332-047e4294a2cd/osd-block-c5bcdd68-0c8f-46dc-8a25-561605efa0ff /var/lib/ceph/osd/ceph-1/block 2026-03-09T14:40:21.404 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:40:20 vm07 bash[65171]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block 2026-03-09T14:40:21.404 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:40:20 vm07 bash[65171]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 2026-03-09T14:40:21.404 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:40:20 vm07 bash[65171]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1 2026-03-09T14:40:21.404 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:40:20 vm07 bash[65171]: --> ceph-volume lvm activate successful for osd ID: 1 2026-03-09T14:40:21.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:20 vm07 bash[56315]: cluster 2026-03-09T14:40:19.939913+0000 mon.a (mon.0) 338 : cluster [DBG] osdmap e109: 8 total, 7 up, 8 in 2026-03-09T14:40:21.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:20 vm07 bash[56315]: cluster 2026-03-09T14:40:19.939913+0000 mon.a (mon.0) 338 : cluster [DBG] osdmap e109: 8 total, 7 up, 8 in 2026-03-09T14:40:22.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:21 vm07 bash[55244]: cluster 2026-03-09T14:40:20.542815+0000 mgr.y (mgr.44103) 134 : cluster [DBG] pgmap v64: 161 pgs: 20 stale+active+clean, 141 active+clean; 457 KiB data, 163 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
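The block above is the redeployed osd.1 container re-activating its BlueStore volume: prime-osd-dir against the osd-block LV, a block symlink, and ownership fixes, ending in "ceph-volume lvm activate successful for osd ID: 1". For reference, those steps are what a single ceph-volume invocation performs; a sketch only, with the OSD fsid inferred from the osd-block-* LV name in the log:

    # one-command equivalent of the activation steps logged above (id 1, fsid inferred from the LV name)
    ceph-volume lvm activate --no-systemd 1 c5bcdd68-0c8f-46dc-8a25-561605efa0ff
    # which, per the log, amounts to:
    #   ceph-bluestore-tool prime-osd-dir --dev <osd-block LV> --path /var/lib/ceph/osd/ceph-1 --no-mon-config
    #   ln -snf <osd-block LV> /var/lib/ceph/osd/ceph-1/block && chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
    #   chown -R ceph:ceph /var/lib/ceph/osd/ceph-1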
2026-03-09T14:40:22.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:21 vm07 bash[55244]: cluster 2026-03-09T14:40:20.542815+0000 mgr.y (mgr.44103) 134 : cluster [DBG] pgmap v64: 161 pgs: 20 stale+active+clean, 141 active+clean; 457 KiB data, 163 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:40:22.155 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:40:21 vm07 bash[65524]: debug 2026-03-09T14:40:21.766+0000 7f75cb10f740 -1 Falling back to public interface 2026-03-09T14:40:22.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:21 vm07 bash[56315]: cluster 2026-03-09T14:40:20.542815+0000 mgr.y (mgr.44103) 134 : cluster [DBG] pgmap v64: 161 pgs: 20 stale+active+clean, 141 active+clean; 457 KiB data, 163 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:40:22.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:21 vm07 bash[56315]: cluster 2026-03-09T14:40:20.542815+0000 mgr.y (mgr.44103) 134 : cluster [DBG] pgmap v64: 161 pgs: 20 stale+active+clean, 141 active+clean; 457 KiB data, 163 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:40:22.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:21 vm11 bash[43577]: cluster 2026-03-09T14:40:20.542815+0000 mgr.y (mgr.44103) 134 : cluster [DBG] pgmap v64: 161 pgs: 20 stale+active+clean, 141 active+clean; 457 KiB data, 163 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:40:22.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:21 vm11 bash[43577]: cluster 2026-03-09T14:40:20.542815+0000 mgr.y (mgr.44103) 134 : cluster [DBG] pgmap v64: 161 pgs: 20 stale+active+clean, 141 active+clean; 457 KiB data, 163 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:40:23.154 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:40:22 vm07 bash[65524]: debug 2026-03-09T14:40:22.718+0000 7f75cb10f740 -1 osd.1 0 read_superblock omap replica is missing. 
2026-03-09T14:40:23.154 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:40:22 vm07 bash[65524]: debug 2026-03-09T14:40:22.730+0000 7f75cb10f740 -1 osd.1 107 log_to_monitors true 2026-03-09T14:40:23.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:23 vm07 bash[55244]: cluster 2026-03-09T14:40:22.543207+0000 mgr.y (mgr.44103) 135 : cluster [DBG] pgmap v65: 161 pgs: 8 active+undersized, 16 stale+active+clean, 5 active+undersized+degraded, 132 active+clean; 457 KiB data, 163 MiB used, 160 GiB / 160 GiB avail; 27/627 objects degraded (4.306%) 2026-03-09T14:40:23.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:23 vm07 bash[55244]: cluster 2026-03-09T14:40:22.543207+0000 mgr.y (mgr.44103) 135 : cluster [DBG] pgmap v65: 161 pgs: 8 active+undersized, 16 stale+active+clean, 5 active+undersized+degraded, 132 active+clean; 457 KiB data, 163 MiB used, 160 GiB / 160 GiB avail; 27/627 objects degraded (4.306%) 2026-03-09T14:40:23.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:23 vm07 bash[55244]: audit 2026-03-09T14:40:22.577553+0000 mon.a (mon.0) 339 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:23.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:23 vm07 bash[55244]: audit 2026-03-09T14:40:22.577553+0000 mon.a (mon.0) 339 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:23.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:23 vm07 bash[55244]: audit 2026-03-09T14:40:22.579100+0000 mon.a (mon.0) 340 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:40:23.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:23 vm07 bash[55244]: audit 2026-03-09T14:40:22.579100+0000 mon.a (mon.0) 340 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:40:23.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:23 vm07 bash[55244]: audit 2026-03-09T14:40:22.739716+0000 mon.c (mon.1) 14 : audit [INF] from='osd.1 [v2:192.168.123.107:6810/335417660,v1:192.168.123.107:6811/335417660]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T14:40:23.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:23 vm07 bash[55244]: audit 2026-03-09T14:40:22.739716+0000 mon.c (mon.1) 14 : audit [INF] from='osd.1 [v2:192.168.123.107:6810/335417660,v1:192.168.123.107:6811/335417660]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T14:40:23.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:23 vm07 bash[55244]: audit 2026-03-09T14:40:22.740115+0000 mon.a (mon.0) 341 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T14:40:23.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:23 vm07 bash[55244]: audit 2026-03-09T14:40:22.740115+0000 mon.a (mon.0) 341 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T14:40:23.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:23 vm07 bash[55244]: cluster 2026-03-09T14:40:22.946397+0000 mon.a (mon.0) 342 : cluster [WRN] Health check failed: Degraded data redundancy: 27/627 objects degraded (4.306%), 5 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:23.905 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:23 vm07 bash[55244]: cluster 2026-03-09T14:40:22.946397+0000 mon.a (mon.0) 342 : cluster [WRN] Health check failed: Degraded data redundancy: 27/627 objects degraded (4.306%), 5 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:23.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:23 vm07 bash[56315]: cluster 2026-03-09T14:40:22.543207+0000 mgr.y (mgr.44103) 135 : cluster [DBG] pgmap v65: 161 pgs: 8 active+undersized, 16 stale+active+clean, 5 active+undersized+degraded, 132 active+clean; 457 KiB data, 163 MiB used, 160 GiB / 160 GiB avail; 27/627 objects degraded (4.306%) 2026-03-09T14:40:23.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:23 vm07 bash[56315]: cluster 2026-03-09T14:40:22.543207+0000 mgr.y (mgr.44103) 135 : cluster [DBG] pgmap v65: 161 pgs: 8 active+undersized, 16 stale+active+clean, 5 active+undersized+degraded, 132 active+clean; 457 KiB data, 163 MiB used, 160 GiB / 160 GiB avail; 27/627 objects degraded (4.306%) 2026-03-09T14:40:23.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:23 vm07 bash[56315]: audit 2026-03-09T14:40:22.577553+0000 mon.a (mon.0) 339 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:23.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:23 vm07 bash[56315]: audit 2026-03-09T14:40:22.577553+0000 mon.a (mon.0) 339 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:23.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:23 vm07 bash[56315]: audit 2026-03-09T14:40:22.579100+0000 mon.a (mon.0) 340 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:40:23.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:23 vm07 bash[56315]: audit 2026-03-09T14:40:22.579100+0000 mon.a (mon.0) 340 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:40:23.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:23 vm07 bash[56315]: audit 2026-03-09T14:40:22.739716+0000 mon.c (mon.1) 14 : audit [INF] from='osd.1 [v2:192.168.123.107:6810/335417660,v1:192.168.123.107:6811/335417660]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T14:40:23.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:23 vm07 bash[56315]: audit 2026-03-09T14:40:22.739716+0000 mon.c (mon.1) 14 : audit [INF] from='osd.1 [v2:192.168.123.107:6810/335417660,v1:192.168.123.107:6811/335417660]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T14:40:23.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:23 vm07 bash[56315]: audit 2026-03-09T14:40:22.740115+0000 mon.a (mon.0) 341 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T14:40:23.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:23 vm07 bash[56315]: audit 2026-03-09T14:40:22.740115+0000 mon.a (mon.0) 341 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T14:40:23.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:23 vm07 bash[56315]: cluster 2026-03-09T14:40:22.946397+0000 mon.a (mon.0) 342 : cluster [WRN] Health check failed: Degraded data redundancy: 27/627 objects degraded 
(4.306%), 5 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:23.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:23 vm07 bash[56315]: cluster 2026-03-09T14:40:22.946397+0000 mon.a (mon.0) 342 : cluster [WRN] Health check failed: Degraded data redundancy: 27/627 objects degraded (4.306%), 5 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:23.905 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:40:23 vm07 bash[52213]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:40:23] "GET /metrics HTTP/1.1" 200 37980 "" "Prometheus/2.51.0" 2026-03-09T14:40:24.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:23 vm11 bash[43577]: cluster 2026-03-09T14:40:22.543207+0000 mgr.y (mgr.44103) 135 : cluster [DBG] pgmap v65: 161 pgs: 8 active+undersized, 16 stale+active+clean, 5 active+undersized+degraded, 132 active+clean; 457 KiB data, 163 MiB used, 160 GiB / 160 GiB avail; 27/627 objects degraded (4.306%) 2026-03-09T14:40:24.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:23 vm11 bash[43577]: cluster 2026-03-09T14:40:22.543207+0000 mgr.y (mgr.44103) 135 : cluster [DBG] pgmap v65: 161 pgs: 8 active+undersized, 16 stale+active+clean, 5 active+undersized+degraded, 132 active+clean; 457 KiB data, 163 MiB used, 160 GiB / 160 GiB avail; 27/627 objects degraded (4.306%) 2026-03-09T14:40:24.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:23 vm11 bash[43577]: audit 2026-03-09T14:40:22.577553+0000 mon.a (mon.0) 339 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:24.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:23 vm11 bash[43577]: audit 2026-03-09T14:40:22.577553+0000 mon.a (mon.0) 339 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:24.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:23 vm11 bash[43577]: audit 2026-03-09T14:40:22.579100+0000 mon.a (mon.0) 340 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:40:24.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:23 vm11 bash[43577]: audit 2026-03-09T14:40:22.579100+0000 mon.a (mon.0) 340 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:40:24.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:23 vm11 bash[43577]: audit 2026-03-09T14:40:22.739716+0000 mon.c (mon.1) 14 : audit [INF] from='osd.1 [v2:192.168.123.107:6810/335417660,v1:192.168.123.107:6811/335417660]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T14:40:24.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:23 vm11 bash[43577]: audit 2026-03-09T14:40:22.739716+0000 mon.c (mon.1) 14 : audit [INF] from='osd.1 [v2:192.168.123.107:6810/335417660,v1:192.168.123.107:6811/335417660]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T14:40:24.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:23 vm11 bash[43577]: audit 2026-03-09T14:40:22.740115+0000 mon.a (mon.0) 341 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T14:40:24.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:23 vm11 bash[43577]: audit 2026-03-09T14:40:22.740115+0000 mon.a (mon.0) 341 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", 
"ids": ["1"]}]: dispatch 2026-03-09T14:40:24.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:23 vm11 bash[43577]: cluster 2026-03-09T14:40:22.946397+0000 mon.a (mon.0) 342 : cluster [WRN] Health check failed: Degraded data redundancy: 27/627 objects degraded (4.306%), 5 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:24.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:23 vm11 bash[43577]: cluster 2026-03-09T14:40:22.946397+0000 mon.a (mon.0) 342 : cluster [WRN] Health check failed: Degraded data redundancy: 27/627 objects degraded (4.306%), 5 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:24.503 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:40:24 vm11 bash[41290]: ts=2026-03-09T14:40:24.146Z caller=alerting.go:391 level=warn component="rule manager" alert="unsupported value type" msg="Expanding alert template failed" err="error executing template __alert_CephOSDDown: template: __alert_CephOSDDown:1:358: executing \"__alert_CephOSDDown\" at : error calling query: found duplicate series for the match group {ceph_daemon=\"osd.1\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.1\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.1\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"192.168.123.111:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}];many-to-many matching not allowed: matching labels must be unique on one side" data="unsupported value type" 2026-03-09T14:40:24.503 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:40:24 vm11 bash[41290]: ts=2026-03-09T14:40:24.146Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.1\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.1\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.1\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"192.168.123.111:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:40:24.905 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:40:24 vm07 bash[65524]: debug 2026-03-09T14:40:24.422+0000 7f75c26b9640 -1 osd.1 107 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T14:40:24.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:24 vm07 bash[56315]: audit 2026-03-09T14:40:23.586270+0000 mon.a (mon.0) 343 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T14:40:24.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:24 vm07 bash[56315]: audit 2026-03-09T14:40:23.586270+0000 mon.a (mon.0) 343 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T14:40:24.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:24 vm07 bash[56315]: cluster 2026-03-09T14:40:23.591900+0000 mon.a (mon.0) 344 : cluster [DBG] osdmap e110: 8 total, 7 up, 8 in 2026-03-09T14:40:24.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:24 vm07 bash[56315]: cluster 2026-03-09T14:40:23.591900+0000 mon.a (mon.0) 344 : cluster [DBG] osdmap e110: 8 total, 7 up, 8 in 2026-03-09T14:40:24.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:24 vm07 bash[56315]: audit 2026-03-09T14:40:23.595150+0000 mon.c (mon.1) 15 : audit [INF] from='osd.1 [v2:192.168.123.107:6810/335417660,v1:192.168.123.107:6811/335417660]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:40:24.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:24 vm07 bash[56315]: audit 2026-03-09T14:40:23.595150+0000 mon.c (mon.1) 15 : audit [INF] from='osd.1 [v2:192.168.123.107:6810/335417660,v1:192.168.123.107:6811/335417660]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:40:24.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:24 vm07 bash[56315]: audit 2026-03-09T14:40:23.595411+0000 mon.a (mon.0) 345 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:40:24.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:24 vm07 
bash[56315]: audit 2026-03-09T14:40:23.595411+0000 mon.a (mon.0) 345 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:40:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:24 vm07 bash[55244]: audit 2026-03-09T14:40:23.586270+0000 mon.a (mon.0) 343 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T14:40:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:24 vm07 bash[55244]: audit 2026-03-09T14:40:23.586270+0000 mon.a (mon.0) 343 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T14:40:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:24 vm07 bash[55244]: cluster 2026-03-09T14:40:23.591900+0000 mon.a (mon.0) 344 : cluster [DBG] osdmap e110: 8 total, 7 up, 8 in 2026-03-09T14:40:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:24 vm07 bash[55244]: cluster 2026-03-09T14:40:23.591900+0000 mon.a (mon.0) 344 : cluster [DBG] osdmap e110: 8 total, 7 up, 8 in 2026-03-09T14:40:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:24 vm07 bash[55244]: audit 2026-03-09T14:40:23.595150+0000 mon.c (mon.1) 15 : audit [INF] from='osd.1 [v2:192.168.123.107:6810/335417660,v1:192.168.123.107:6811/335417660]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:40:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:24 vm07 bash[55244]: audit 2026-03-09T14:40:23.595150+0000 mon.c (mon.1) 15 : audit [INF] from='osd.1 [v2:192.168.123.107:6810/335417660,v1:192.168.123.107:6811/335417660]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:40:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:24 vm07 bash[55244]: audit 2026-03-09T14:40:23.595411+0000 mon.a (mon.0) 345 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:40:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:24 vm07 bash[55244]: audit 2026-03-09T14:40:23.595411+0000 mon.a (mon.0) 345 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:40:25.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:24 vm11 bash[43577]: audit 2026-03-09T14:40:23.586270+0000 mon.a (mon.0) 343 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T14:40:25.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:24 vm11 bash[43577]: audit 2026-03-09T14:40:23.586270+0000 mon.a (mon.0) 343 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T14:40:25.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:24 vm11 bash[43577]: cluster 2026-03-09T14:40:23.591900+0000 mon.a (mon.0) 344 : cluster [DBG] osdmap e110: 8 total, 7 up, 8 in 2026-03-09T14:40:25.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:24 vm11 bash[43577]: cluster 2026-03-09T14:40:23.591900+0000 mon.a (mon.0) 344 : cluster [DBG] 
osdmap e110: 8 total, 7 up, 8 in 2026-03-09T14:40:25.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:24 vm11 bash[43577]: audit 2026-03-09T14:40:23.595150+0000 mon.c (mon.1) 15 : audit [INF] from='osd.1 [v2:192.168.123.107:6810/335417660,v1:192.168.123.107:6811/335417660]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:40:25.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:24 vm11 bash[43577]: audit 2026-03-09T14:40:23.595150+0000 mon.c (mon.1) 15 : audit [INF] from='osd.1 [v2:192.168.123.107:6810/335417660,v1:192.168.123.107:6811/335417660]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:40:25.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:24 vm11 bash[43577]: audit 2026-03-09T14:40:23.595411+0000 mon.a (mon.0) 345 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:40:25.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:24 vm11 bash[43577]: audit 2026-03-09T14:40:23.595411+0000 mon.a (mon.0) 345 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-09T14:40:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:25 vm07 bash[56315]: cluster 2026-03-09T14:40:24.543526+0000 mgr.y (mgr.44103) 136 : cluster [DBG] pgmap v67: 161 pgs: 38 active+undersized, 25 active+undersized+degraded, 98 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 88/627 objects degraded (14.035%) 2026-03-09T14:40:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:25 vm07 bash[56315]: cluster 2026-03-09T14:40:24.543526+0000 mgr.y (mgr.44103) 136 : cluster [DBG] pgmap v67: 161 pgs: 38 active+undersized, 25 active+undersized+degraded, 98 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 88/627 objects degraded (14.035%) 2026-03-09T14:40:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:25 vm07 bash[56315]: cluster 2026-03-09T14:40:24.592809+0000 mon.a (mon.0) 346 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:40:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:25 vm07 bash[56315]: cluster 2026-03-09T14:40:24.592809+0000 mon.a (mon.0) 346 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:40:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:25 vm07 bash[56315]: cluster 2026-03-09T14:40:24.615864+0000 mon.a (mon.0) 347 : cluster [INF] osd.1 [v2:192.168.123.107:6810/335417660,v1:192.168.123.107:6811/335417660] boot 2026-03-09T14:40:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:25 vm07 bash[56315]: cluster 2026-03-09T14:40:24.615864+0000 mon.a (mon.0) 347 : cluster [INF] osd.1 [v2:192.168.123.107:6810/335417660,v1:192.168.123.107:6811/335417660] boot 2026-03-09T14:40:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:25 vm07 bash[56315]: cluster 2026-03-09T14:40:24.616004+0000 mon.a (mon.0) 348 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-09T14:40:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:25 vm07 bash[56315]: cluster 2026-03-09T14:40:24.616004+0000 mon.a (mon.0) 348 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-09T14:40:25.905 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:25 vm07 bash[56315]: audit 2026-03-09T14:40:24.620490+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:40:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:25 vm07 bash[56315]: audit 2026-03-09T14:40:24.620490+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:40:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:25 vm07 bash[55244]: cluster 2026-03-09T14:40:24.543526+0000 mgr.y (mgr.44103) 136 : cluster [DBG] pgmap v67: 161 pgs: 38 active+undersized, 25 active+undersized+degraded, 98 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 88/627 objects degraded (14.035%) 2026-03-09T14:40:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:25 vm07 bash[55244]: cluster 2026-03-09T14:40:24.543526+0000 mgr.y (mgr.44103) 136 : cluster [DBG] pgmap v67: 161 pgs: 38 active+undersized, 25 active+undersized+degraded, 98 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 88/627 objects degraded (14.035%) 2026-03-09T14:40:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:25 vm07 bash[55244]: cluster 2026-03-09T14:40:24.592809+0000 mon.a (mon.0) 346 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:40:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:25 vm07 bash[55244]: cluster 2026-03-09T14:40:24.592809+0000 mon.a (mon.0) 346 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:40:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:25 vm07 bash[55244]: cluster 2026-03-09T14:40:24.615864+0000 mon.a (mon.0) 347 : cluster [INF] osd.1 [v2:192.168.123.107:6810/335417660,v1:192.168.123.107:6811/335417660] boot 2026-03-09T14:40:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:25 vm07 bash[55244]: cluster 2026-03-09T14:40:24.615864+0000 mon.a (mon.0) 347 : cluster [INF] osd.1 [v2:192.168.123.107:6810/335417660,v1:192.168.123.107:6811/335417660] boot 2026-03-09T14:40:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:25 vm07 bash[55244]: cluster 2026-03-09T14:40:24.616004+0000 mon.a (mon.0) 348 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-09T14:40:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:25 vm07 bash[55244]: cluster 2026-03-09T14:40:24.616004+0000 mon.a (mon.0) 348 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-09T14:40:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:25 vm07 bash[55244]: audit 2026-03-09T14:40:24.620490+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:40:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:25 vm07 bash[55244]: audit 2026-03-09T14:40:24.620490+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:40:26.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:25 vm11 bash[43577]: cluster 2026-03-09T14:40:24.543526+0000 mgr.y (mgr.44103) 136 : cluster [DBG] pgmap v67: 161 pgs: 38 active+undersized, 25 active+undersized+degraded, 98 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 88/627 objects degraded (14.035%) 2026-03-09T14:40:26.003 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:25 vm11 bash[43577]: cluster 2026-03-09T14:40:24.543526+0000 mgr.y (mgr.44103) 136 : cluster [DBG] pgmap v67: 161 pgs: 38 active+undersized, 25 active+undersized+degraded, 98 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 88/627 objects degraded (14.035%) 2026-03-09T14:40:26.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:25 vm11 bash[43577]: cluster 2026-03-09T14:40:24.592809+0000 mon.a (mon.0) 346 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:40:26.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:25 vm11 bash[43577]: cluster 2026-03-09T14:40:24.592809+0000 mon.a (mon.0) 346 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:40:26.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:25 vm11 bash[43577]: cluster 2026-03-09T14:40:24.615864+0000 mon.a (mon.0) 347 : cluster [INF] osd.1 [v2:192.168.123.107:6810/335417660,v1:192.168.123.107:6811/335417660] boot 2026-03-09T14:40:26.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:25 vm11 bash[43577]: cluster 2026-03-09T14:40:24.615864+0000 mon.a (mon.0) 347 : cluster [INF] osd.1 [v2:192.168.123.107:6810/335417660,v1:192.168.123.107:6811/335417660] boot 2026-03-09T14:40:26.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:25 vm11 bash[43577]: cluster 2026-03-09T14:40:24.616004+0000 mon.a (mon.0) 348 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-09T14:40:26.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:25 vm11 bash[43577]: cluster 2026-03-09T14:40:24.616004+0000 mon.a (mon.0) 348 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-09T14:40:26.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:25 vm11 bash[43577]: audit 2026-03-09T14:40:24.620490+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:40:26.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:25 vm11 bash[43577]: audit 2026-03-09T14:40:24.620490+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:40:26.643 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:26 vm07 bash[56315]: cluster 2026-03-09T14:40:24.425902+0000 osd.1 (osd.1) 1 : cluster [WRN] OSD bench result of 29849.117487 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. 2026-03-09T14:40:26.643 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:26 vm07 bash[56315]: cluster 2026-03-09T14:40:24.425902+0000 osd.1 (osd.1) 1 : cluster [WRN] OSD bench result of 29849.117487 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. 
2026-03-09T14:40:26.643 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:26 vm07 bash[56315]: cluster 2026-03-09T14:40:25.629406+0000 mon.a (mon.0) 350 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in 2026-03-09T14:40:26.643 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:26 vm07 bash[56315]: cluster 2026-03-09T14:40:25.629406+0000 mon.a (mon.0) 350 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in 2026-03-09T14:40:26.643 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:26 vm07 bash[56315]: audit 2026-03-09T14:40:26.023494+0000 mon.a (mon.0) 351 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:26.643 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:26 vm07 bash[56315]: audit 2026-03-09T14:40:26.023494+0000 mon.a (mon.0) 351 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:26.643 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:26 vm07 bash[56315]: audit 2026-03-09T14:40:26.029726+0000 mon.a (mon.0) 352 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:26.644 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:26 vm07 bash[56315]: audit 2026-03-09T14:40:26.029726+0000 mon.a (mon.0) 352 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:26.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:26 vm07 bash[56315]: audit 2026-03-09T14:40:26.618745+0000 mon.a (mon.0) 353 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:26.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:26 vm07 bash[56315]: audit 2026-03-09T14:40:26.618745+0000 mon.a (mon.0) 353 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:26.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:26 vm07 bash[56315]: audit 2026-03-09T14:40:26.624569+0000 mon.a (mon.0) 354 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:26.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:26 vm07 bash[56315]: audit 2026-03-09T14:40:26.624569+0000 mon.a (mon.0) 354 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:26.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:26 vm07 bash[55244]: cluster 2026-03-09T14:40:24.425902+0000 osd.1 (osd.1) 1 : cluster [WRN] OSD bench result of 29849.117487 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. 2026-03-09T14:40:26.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:26 vm07 bash[55244]: cluster 2026-03-09T14:40:24.425902+0000 osd.1 (osd.1) 1 : cluster [WRN] OSD bench result of 29849.117487 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. 
2026-03-09T14:40:26.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:26 vm07 bash[55244]: cluster 2026-03-09T14:40:25.629406+0000 mon.a (mon.0) 350 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in 2026-03-09T14:40:26.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:26 vm07 bash[55244]: cluster 2026-03-09T14:40:25.629406+0000 mon.a (mon.0) 350 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in 2026-03-09T14:40:26.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:26 vm07 bash[55244]: audit 2026-03-09T14:40:26.023494+0000 mon.a (mon.0) 351 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:26.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:26 vm07 bash[55244]: audit 2026-03-09T14:40:26.023494+0000 mon.a (mon.0) 351 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:26.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:26 vm07 bash[55244]: audit 2026-03-09T14:40:26.029726+0000 mon.a (mon.0) 352 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:26.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:26 vm07 bash[55244]: audit 2026-03-09T14:40:26.029726+0000 mon.a (mon.0) 352 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:26.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:26 vm07 bash[55244]: audit 2026-03-09T14:40:26.618745+0000 mon.a (mon.0) 353 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:26.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:26 vm07 bash[55244]: audit 2026-03-09T14:40:26.618745+0000 mon.a (mon.0) 353 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:26.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:26 vm07 bash[55244]: audit 2026-03-09T14:40:26.624569+0000 mon.a (mon.0) 354 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:26.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:26 vm07 bash[55244]: audit 2026-03-09T14:40:26.624569+0000 mon.a (mon.0) 354 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:26.944 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:26 vm11 bash[43577]: cluster 2026-03-09T14:40:24.425902+0000 osd.1 (osd.1) 1 : cluster [WRN] OSD bench result of 29849.117487 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. 2026-03-09T14:40:26.944 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:26 vm11 bash[43577]: cluster 2026-03-09T14:40:24.425902+0000 osd.1 (osd.1) 1 : cluster [WRN] OSD bench result of 29849.117487 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. 
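The OSD bench warnings captured above recommend measuring the device's real IOPS capacity with an external benchmark and then overriding osd_mclock_max_capacity_iops_[hdd|ssd]. A minimal sketch of that override, assuming an externally measured figure of roughly 315 IOPS for osd.1 (the number here is illustrative, not taken from this run):

    # after benchmarking the backing device (e.g. with fio), pin the capacity for this OSD
    ceph config set osd.1 osd_mclock_max_capacity_iops_hdd 315
    # confirm the value the OSD will use
    ceph config get osd.1 osd_mclock_max_capacity_iops_hdd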
2026-03-09T14:40:26.944 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:26 vm11 bash[43577]: cluster 2026-03-09T14:40:25.629406+0000 mon.a (mon.0) 350 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in 2026-03-09T14:40:26.944 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:26 vm11 bash[43577]: cluster 2026-03-09T14:40:25.629406+0000 mon.a (mon.0) 350 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in 2026-03-09T14:40:26.944 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:26 vm11 bash[43577]: audit 2026-03-09T14:40:26.023494+0000 mon.a (mon.0) 351 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:26.944 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:26 vm11 bash[43577]: audit 2026-03-09T14:40:26.023494+0000 mon.a (mon.0) 351 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:26.944 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:26 vm11 bash[43577]: audit 2026-03-09T14:40:26.029726+0000 mon.a (mon.0) 352 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:26.944 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:26 vm11 bash[43577]: audit 2026-03-09T14:40:26.029726+0000 mon.a (mon.0) 352 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:26.944 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:26 vm11 bash[43577]: audit 2026-03-09T14:40:26.618745+0000 mon.a (mon.0) 353 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:26.944 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:26 vm11 bash[43577]: audit 2026-03-09T14:40:26.618745+0000 mon.a (mon.0) 353 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:26.944 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:26 vm11 bash[43577]: audit 2026-03-09T14:40:26.624569+0000 mon.a (mon.0) 354 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:26.944 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:26 vm11 bash[43577]: audit 2026-03-09T14:40:26.624569+0000 mon.a (mon.0) 354 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:27.253 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:40:26 vm11 bash[41290]: ts=2026-03-09T14:40:26.947Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", 
release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:40:28.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:28 vm07 bash[56315]: cluster 2026-03-09T14:40:26.544051+0000 mgr.y (mgr.44103) 137 : cluster [DBG] pgmap v70: 161 pgs: 31 active+undersized, 23 active+undersized+degraded, 107 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 81/627 objects degraded (12.919%) 2026-03-09T14:40:28.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:28 vm07 bash[56315]: cluster 2026-03-09T14:40:26.544051+0000 mgr.y (mgr.44103) 137 : cluster [DBG] pgmap v70: 161 pgs: 31 active+undersized, 23 active+undersized+degraded, 107 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 81/627 objects degraded (12.919%) 2026-03-09T14:40:28.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:28 vm07 bash[56315]: cluster 2026-03-09T14:40:27.963374+0000 mon.a (mon.0) 355 : cluster [WRN] Health check update: Degraded data redundancy: 81/627 objects degraded (12.919%), 23 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:28.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:28 vm07 bash[56315]: cluster 2026-03-09T14:40:27.963374+0000 mon.a (mon.0) 355 : cluster [WRN] Health check update: Degraded data redundancy: 81/627 objects degraded (12.919%), 23 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:28.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:28 vm07 bash[55244]: cluster 2026-03-09T14:40:26.544051+0000 mgr.y (mgr.44103) 137 : cluster [DBG] pgmap v70: 161 pgs: 31 active+undersized, 23 active+undersized+degraded, 107 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 81/627 objects degraded (12.919%) 2026-03-09T14:40:28.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:28 vm07 bash[55244]: cluster 2026-03-09T14:40:26.544051+0000 mgr.y (mgr.44103) 137 : cluster [DBG] pgmap v70: 161 pgs: 31 active+undersized, 23 active+undersized+degraded, 107 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 81/627 objects degraded (12.919%) 2026-03-09T14:40:28.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:28 vm07 bash[55244]: cluster 2026-03-09T14:40:27.963374+0000 mon.a (mon.0) 355 : cluster [WRN] Health check update: Degraded data redundancy: 81/627 objects degraded (12.919%), 23 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:28.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:28 vm07 bash[55244]: cluster 2026-03-09T14:40:27.963374+0000 mon.a (mon.0) 355 : cluster [WRN] Health check update: Degraded data redundancy: 81/627 objects degraded (12.919%), 23 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:28.499 INFO:teuthology.orchestra.run.vm07.stdout:true 2026-03-09T14:40:28.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:28 vm11 bash[43577]: cluster 2026-03-09T14:40:26.544051+0000 mgr.y (mgr.44103) 137 : cluster [DBG] pgmap v70: 161 pgs: 31 active+undersized, 23 active+undersized+degraded, 107 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 81/627 objects degraded (12.919%) 2026-03-09T14:40:28.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:28 vm11 bash[43577]: cluster 2026-03-09T14:40:26.544051+0000 mgr.y (mgr.44103) 137 : cluster [DBG] pgmap v70: 161 pgs: 31 active+undersized, 23 active+undersized+degraded, 107 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 81/627 objects degraded (12.919%) 2026-03-09T14:40:28.503 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:28 vm11 bash[43577]: cluster 2026-03-09T14:40:27.963374+0000 mon.a (mon.0) 355 : cluster [WRN] Health check update: Degraded data redundancy: 81/627 objects degraded (12.919%), 23 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:28.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:28 vm11 bash[43577]: cluster 2026-03-09T14:40:27.963374+0000 mon.a (mon.0) 355 : cluster [WRN] Health check update: Degraded data redundancy: 81/627 objects degraded (12.919%), 23 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:28.885 INFO:teuthology.orchestra.run.vm07.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T14:40:28.885 INFO:teuthology.orchestra.run.vm07.stdout:alertmanager.a vm07 *:9093,9094 running (2m) 2s ago 7m 13.6M - 0.25.0 c8568f914cd2 7b5214f8e385 2026-03-09T14:40:28.885 INFO:teuthology.orchestra.run.vm07.stdout:grafana.a vm11 *:3000 running (2m) 75s ago 7m 37.3M - dad864ee21e9 614f6a00be7a 2026-03-09T14:40:28.885 INFO:teuthology.orchestra.run.vm07.stdout:iscsi.foo.vm07.ohlmos vm07 running (2m) 2s ago 7m 43.0M - 3.5 e1d6a67b021e e3b30dab288c 2026-03-09T14:40:28.885 INFO:teuthology.orchestra.run.vm07.stdout:mgr.x vm11 *:8443,9283,8765 running (2m) 75s ago 10m 464M - 19.2.3-678-ge911bdeb 654f31e6858e d35dddd392d1 2026-03-09T14:40:28.885 INFO:teuthology.orchestra.run.vm07.stdout:mgr.y vm07 *:8443,9283,8765 running (2m) 2s ago 11m 528M - 19.2.3-678-ge911bdeb 654f31e6858e bdbac6dff330 2026-03-09T14:40:28.885 INFO:teuthology.orchestra.run.vm07.stdout:mon.a vm07 running (101s) 2s ago 11m 44.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e bcdaa5dfc948 2026-03-09T14:40:28.885 INFO:teuthology.orchestra.run.vm07.stdout:mon.b vm11 running (81s) 75s ago 10m 19.1M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1caba9bf8a13 2026-03-09T14:40:28.885 INFO:teuthology.orchestra.run.vm07.stdout:mon.c vm07 running (115s) 2s ago 10m 42.8M 2048M 19.2.3-678-ge911bdeb 654f31e6858e ff7dfe3a6c7c 2026-03-09T14:40:28.885 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.a vm07 *:9100 running (2m) 2s ago 8m 7591k - 1.7.0 72c9c2088986 16d64a9c3aa7 2026-03-09T14:40:28.885 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.b vm11 *:9100 running (2m) 75s ago 8m 7231k - 1.7.0 72c9c2088986 8e368c535897 2026-03-09T14:40:28.885 INFO:teuthology.orchestra.run.vm07.stdout:osd.0 vm07 running (24s) 2s ago 10m 45.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 24632814894d 2026-03-09T14:40:28.885 INFO:teuthology.orchestra.run.vm07.stdout:osd.1 vm07 running (7s) 2s ago 9m 31.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 1f773b5d0f68 2026-03-09T14:40:28.885 INFO:teuthology.orchestra.run.vm07.stdout:osd.2 vm07 running (40s) 2s ago 9m 65.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 7d943c2f091c 2026-03-09T14:40:28.885 INFO:teuthology.orchestra.run.vm07.stdout:osd.3 vm07 running (57s) 2s ago 9m 48.1M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 7c234b83449a 2026-03-09T14:40:28.885 INFO:teuthology.orchestra.run.vm07.stdout:osd.4 vm11 running (9m) 75s ago 9m 51.4M 4096M 17.2.0 e1d6a67b021e 172516d931e5 2026-03-09T14:40:28.885 INFO:teuthology.orchestra.run.vm07.stdout:osd.5 vm11 running (8m) 75s ago 8m 49.0M 4096M 17.2.0 e1d6a67b021e d7defb26b5d1 2026-03-09T14:40:28.885 INFO:teuthology.orchestra.run.vm07.stdout:osd.6 vm11 running (8m) 75s ago 8m 49.2M 4096M 17.2.0 e1d6a67b021e 52e28e90b585 2026-03-09T14:40:28.885 INFO:teuthology.orchestra.run.vm07.stdout:osd.7 vm11 running (8m) 75s ago 8m 49.3M 4096M 17.2.0 e1d6a67b021e abb74346bf4d 2026-03-09T14:40:28.885 
INFO:teuthology.orchestra.run.vm07.stdout:prometheus.a vm11 *:9095 running (2m) 75s ago 7m 43.2M - 2.51.0 1d3b7f56885b e88f0339687c 2026-03-09T14:40:28.885 INFO:teuthology.orchestra.run.vm07.stdout:rgw.foo.vm07.urmgxb vm07 *:8000 running (7m) 2s ago 7m 85.8M - 17.2.0 e1d6a67b021e 765128ae03a3 2026-03-09T14:40:28.885 INFO:teuthology.orchestra.run.vm07.stdout:rgw.foo.vm11.ncyump vm11 *:8000 running (7m) 75s ago 7m 84.7M - 17.2.0 e1d6a67b021e 33917711cfd6 2026-03-09T14:40:28.885 INFO:teuthology.orchestra.run.vm07.stdout:rgw.smpl.vm07.tkkeli vm07 *:80 running (7m) 2s ago 7m 85.3M - 17.2.0 e1d6a67b021e 377fed84fff0 2026-03-09T14:40:28.885 INFO:teuthology.orchestra.run.vm07.stdout:rgw.smpl.vm11.ocxkef vm11 *:80 running (7m) 75s ago 7m 84.8M - 17.2.0 e1d6a67b021e 90ec06d07cd4 2026-03-09T14:40:29.152 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T14:40:29.152 INFO:teuthology.orchestra.run.vm07.stdout: "mon": { 2026-03-09T14:40:29.152 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3 2026-03-09T14:40:29.152 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:40:29.152 INFO:teuthology.orchestra.run.vm07.stdout: "mgr": { 2026-03-09T14:40:29.152 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T14:40:29.152 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:40:29.152 INFO:teuthology.orchestra.run.vm07.stdout: "osd": { 2026-03-09T14:40:29.152 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 4, 2026-03-09T14:40:29.152 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 4 2026-03-09T14:40:29.152 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:40:29.152 INFO:teuthology.orchestra.run.vm07.stdout: "rgw": { 2026-03-09T14:40:29.152 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 4 2026-03-09T14:40:29.152 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:40:29.152 INFO:teuthology.orchestra.run.vm07.stdout: "overall": { 2026-03-09T14:40:29.153 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8, 2026-03-09T14:40:29.153 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 9 2026-03-09T14:40:29.153 INFO:teuthology.orchestra.run.vm07.stdout: } 2026-03-09T14:40:29.153 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T14:40:29.349 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T14:40:29.349 INFO:teuthology.orchestra.run.vm07.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", 2026-03-09T14:40:29.349 INFO:teuthology.orchestra.run.vm07.stdout: "in_progress": true, 2026-03-09T14:40:29.349 INFO:teuthology.orchestra.run.vm07.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-09T14:40:29.349 INFO:teuthology.orchestra.run.vm07.stdout: "services_complete": [ 2026-03-09T14:40:29.349 INFO:teuthology.orchestra.run.vm07.stdout: "mgr", 2026-03-09T14:40:29.349 INFO:teuthology.orchestra.run.vm07.stdout: "mon" 2026-03-09T14:40:29.349 INFO:teuthology.orchestra.run.vm07.stdout: ], 2026-03-09T14:40:29.349 
INFO:teuthology.orchestra.run.vm07.stdout: "progress": "9/23 daemons upgraded", 2026-03-09T14:40:29.349 INFO:teuthology.orchestra.run.vm07.stdout: "message": "Currently upgrading osd daemons", 2026-03-09T14:40:29.349 INFO:teuthology.orchestra.run.vm07.stdout: "is_paused": false 2026-03-09T14:40:29.349 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T14:40:29.410 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:29 vm07 bash[56315]: audit 2026-03-09T14:40:27.522035+0000 mgr.y (mgr.44103) 138 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:40:29.410 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:29 vm07 bash[56315]: audit 2026-03-09T14:40:27.522035+0000 mgr.y (mgr.44103) 138 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:40:29.410 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:29 vm07 bash[55244]: audit 2026-03-09T14:40:27.522035+0000 mgr.y (mgr.44103) 138 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:40:29.410 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:29 vm07 bash[55244]: audit 2026-03-09T14:40:27.522035+0000 mgr.y (mgr.44103) 138 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:40:29.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:29 vm11 bash[43577]: audit 2026-03-09T14:40:27.522035+0000 mgr.y (mgr.44103) 138 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:40:29.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:29 vm11 bash[43577]: audit 2026-03-09T14:40:27.522035+0000 mgr.y (mgr.44103) 138 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:40:29.594 INFO:teuthology.orchestra.run.vm07.stdout:HEALTH_WARN Degraded data redundancy: 31/627 objects degraded (4.944%), 7 pgs degraded 2026-03-09T14:40:29.594 INFO:teuthology.orchestra.run.vm07.stdout:[WRN] PG_DEGRADED: Degraded data redundancy: 31/627 objects degraded (4.944%), 7 pgs degraded 2026-03-09T14:40:29.594 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.4 is active+undersized+degraded, acting [0,7] 2026-03-09T14:40:29.594 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.9 is active+undersized+degraded, acting [7,3] 2026-03-09T14:40:29.594 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.a is active+undersized+degraded, acting [3,7] 2026-03-09T14:40:29.594 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.d is active+undersized+degraded, acting [4,3] 2026-03-09T14:40:29.594 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.0 is active+undersized+degraded, acting [2,6] 2026-03-09T14:40:29.594 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.4 is active+undersized+degraded, acting [2,5] 2026-03-09T14:40:29.594 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.19 is active+undersized+degraded, acting [3,4] 2026-03-09T14:40:30.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:30 vm07 bash[56315]: audit 2026-03-09T14:40:28.497252+0000 mgr.y (mgr.44103) 139 : audit [DBG] from='client.54158 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 
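The upgrade status captured above is plain JSON, so its fields can be pulled out directly when watching an upgrade by hand. A small sketch, assuming jq is available on the admin host:

    # prints "true" while daemons are still being converted to the target image
    ceph orch upgrade status | jq '.in_progress'
    # shows which daemon type is currently being upgraded (e.g. "Currently upgrading osd daemons")
    ceph orch upgrade status | jq -r '.message'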
2026-03-09T14:40:30.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:30 vm07 bash[56315]: audit 2026-03-09T14:40:28.497252+0000 mgr.y (mgr.44103) 139 : audit [DBG] from='client.54158 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:40:30.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:30 vm07 bash[56315]: cluster 2026-03-09T14:40:28.544462+0000 mgr.y (mgr.44103) 140 : cluster [DBG] pgmap v71: 161 pgs: 13 active+undersized, 7 active+undersized+degraded, 141 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 31/627 objects degraded (4.944%) 2026-03-09T14:40:30.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:30 vm07 bash[56315]: cluster 2026-03-09T14:40:28.544462+0000 mgr.y (mgr.44103) 140 : cluster [DBG] pgmap v71: 161 pgs: 13 active+undersized, 7 active+undersized+degraded, 141 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 31/627 objects degraded (4.944%) 2026-03-09T14:40:30.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:30 vm07 bash[56315]: audit 2026-03-09T14:40:28.693232+0000 mgr.y (mgr.44103) 141 : audit [DBG] from='client.34228 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:40:30.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:30 vm07 bash[56315]: audit 2026-03-09T14:40:28.693232+0000 mgr.y (mgr.44103) 141 : audit [DBG] from='client.34228 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:40:30.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:30 vm07 bash[56315]: audit 2026-03-09T14:40:28.889303+0000 mgr.y (mgr.44103) 142 : audit [DBG] from='client.34231 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:40:30.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:30 vm07 bash[56315]: audit 2026-03-09T14:40:28.889303+0000 mgr.y (mgr.44103) 142 : audit [DBG] from='client.34231 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:40:30.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:30 vm07 bash[56315]: audit 2026-03-09T14:40:29.160660+0000 mon.c (mon.1) 16 : audit [DBG] from='client.? 192.168.123.107:0/2347531535' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:30.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:30 vm07 bash[56315]: audit 2026-03-09T14:40:29.160660+0000 mon.c (mon.1) 16 : audit [DBG] from='client.? 192.168.123.107:0/2347531535' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:30.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:30 vm07 bash[56315]: audit 2026-03-09T14:40:29.357607+0000 mgr.y (mgr.44103) 143 : audit [DBG] from='client.44265 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:40:30.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:30 vm07 bash[56315]: audit 2026-03-09T14:40:29.357607+0000 mgr.y (mgr.44103) 143 : audit [DBG] from='client.44265 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:40:30.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:30 vm07 bash[56315]: audit 2026-03-09T14:40:29.603357+0000 mon.a (mon.0) 356 : audit [DBG] from='client.? 
192.168.123.107:0/2858762065' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:40:30.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:30 vm07 bash[56315]: audit 2026-03-09T14:40:29.603357+0000 mon.a (mon.0) 356 : audit [DBG] from='client.? 192.168.123.107:0/2858762065' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:40:30.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:30 vm07 bash[55244]: audit 2026-03-09T14:40:28.497252+0000 mgr.y (mgr.44103) 139 : audit [DBG] from='client.54158 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:40:30.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:30 vm07 bash[55244]: audit 2026-03-09T14:40:28.497252+0000 mgr.y (mgr.44103) 139 : audit [DBG] from='client.54158 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:40:30.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:30 vm07 bash[55244]: cluster 2026-03-09T14:40:28.544462+0000 mgr.y (mgr.44103) 140 : cluster [DBG] pgmap v71: 161 pgs: 13 active+undersized, 7 active+undersized+degraded, 141 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 31/627 objects degraded (4.944%) 2026-03-09T14:40:30.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:30 vm07 bash[55244]: cluster 2026-03-09T14:40:28.544462+0000 mgr.y (mgr.44103) 140 : cluster [DBG] pgmap v71: 161 pgs: 13 active+undersized, 7 active+undersized+degraded, 141 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 31/627 objects degraded (4.944%) 2026-03-09T14:40:30.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:30 vm07 bash[55244]: audit 2026-03-09T14:40:28.693232+0000 mgr.y (mgr.44103) 141 : audit [DBG] from='client.34228 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:40:30.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:30 vm07 bash[55244]: audit 2026-03-09T14:40:28.693232+0000 mgr.y (mgr.44103) 141 : audit [DBG] from='client.34228 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:40:30.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:30 vm07 bash[55244]: audit 2026-03-09T14:40:28.889303+0000 mgr.y (mgr.44103) 142 : audit [DBG] from='client.34231 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:40:30.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:30 vm07 bash[55244]: audit 2026-03-09T14:40:28.889303+0000 mgr.y (mgr.44103) 142 : audit [DBG] from='client.34231 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:40:30.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:30 vm07 bash[55244]: audit 2026-03-09T14:40:29.160660+0000 mon.c (mon.1) 16 : audit [DBG] from='client.? 192.168.123.107:0/2347531535' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:30.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:30 vm07 bash[55244]: audit 2026-03-09T14:40:29.160660+0000 mon.c (mon.1) 16 : audit [DBG] from='client.? 
192.168.123.107:0/2347531535' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:30.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:30 vm07 bash[55244]: audit 2026-03-09T14:40:29.357607+0000 mgr.y (mgr.44103) 143 : audit [DBG] from='client.44265 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:40:30.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:30 vm07 bash[55244]: audit 2026-03-09T14:40:29.357607+0000 mgr.y (mgr.44103) 143 : audit [DBG] from='client.44265 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:40:30.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:30 vm07 bash[55244]: audit 2026-03-09T14:40:29.603357+0000 mon.a (mon.0) 356 : audit [DBG] from='client.? 192.168.123.107:0/2858762065' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:40:30.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:30 vm07 bash[55244]: audit 2026-03-09T14:40:29.603357+0000 mon.a (mon.0) 356 : audit [DBG] from='client.? 192.168.123.107:0/2858762065' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:40:30.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:30 vm11 bash[43577]: audit 2026-03-09T14:40:28.497252+0000 mgr.y (mgr.44103) 139 : audit [DBG] from='client.54158 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:40:30.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:30 vm11 bash[43577]: audit 2026-03-09T14:40:28.497252+0000 mgr.y (mgr.44103) 139 : audit [DBG] from='client.54158 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:40:30.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:30 vm11 bash[43577]: cluster 2026-03-09T14:40:28.544462+0000 mgr.y (mgr.44103) 140 : cluster [DBG] pgmap v71: 161 pgs: 13 active+undersized, 7 active+undersized+degraded, 141 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 31/627 objects degraded (4.944%) 2026-03-09T14:40:30.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:30 vm11 bash[43577]: cluster 2026-03-09T14:40:28.544462+0000 mgr.y (mgr.44103) 140 : cluster [DBG] pgmap v71: 161 pgs: 13 active+undersized, 7 active+undersized+degraded, 141 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 31/627 objects degraded (4.944%) 2026-03-09T14:40:30.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:30 vm11 bash[43577]: audit 2026-03-09T14:40:28.693232+0000 mgr.y (mgr.44103) 141 : audit [DBG] from='client.34228 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:40:30.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:30 vm11 bash[43577]: audit 2026-03-09T14:40:28.693232+0000 mgr.y (mgr.44103) 141 : audit [DBG] from='client.34228 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:40:30.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:30 vm11 bash[43577]: audit 2026-03-09T14:40:28.889303+0000 mgr.y (mgr.44103) 142 : audit [DBG] from='client.34231 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:40:30.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:30 vm11 bash[43577]: audit 2026-03-09T14:40:28.889303+0000 mgr.y 
(mgr.44103) 142 : audit [DBG] from='client.34231 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:40:30.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:30 vm11 bash[43577]: audit 2026-03-09T14:40:29.160660+0000 mon.c (mon.1) 16 : audit [DBG] from='client.? 192.168.123.107:0/2347531535' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:30.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:30 vm11 bash[43577]: audit 2026-03-09T14:40:29.160660+0000 mon.c (mon.1) 16 : audit [DBG] from='client.? 192.168.123.107:0/2347531535' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:30.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:30 vm11 bash[43577]: audit 2026-03-09T14:40:29.357607+0000 mgr.y (mgr.44103) 143 : audit [DBG] from='client.44265 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:40:30.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:30 vm11 bash[43577]: audit 2026-03-09T14:40:29.357607+0000 mgr.y (mgr.44103) 143 : audit [DBG] from='client.44265 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:40:30.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:30 vm11 bash[43577]: audit 2026-03-09T14:40:29.603357+0000 mon.a (mon.0) 356 : audit [DBG] from='client.? 192.168.123.107:0/2858762065' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:40:30.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:30 vm11 bash[43577]: audit 2026-03-09T14:40:29.603357+0000 mon.a (mon.0) 356 : audit [DBG] from='client.? 192.168.123.107:0/2858762065' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:40:31.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:31 vm07 bash[56315]: cluster 2026-03-09T14:40:31.049774+0000 mon.a (mon.0) 357 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 31/627 objects degraded (4.944%), 7 pgs degraded) 2026-03-09T14:40:31.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:31 vm07 bash[56315]: cluster 2026-03-09T14:40:31.049774+0000 mon.a (mon.0) 357 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 31/627 objects degraded (4.944%), 7 pgs degraded) 2026-03-09T14:40:31.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:31 vm07 bash[56315]: cluster 2026-03-09T14:40:31.049794+0000 mon.a (mon.0) 358 : cluster [INF] Cluster is now healthy 2026-03-09T14:40:31.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:31 vm07 bash[56315]: cluster 2026-03-09T14:40:31.049794+0000 mon.a (mon.0) 358 : cluster [INF] Cluster is now healthy 2026-03-09T14:40:31.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:31 vm07 bash[55244]: cluster 2026-03-09T14:40:31.049774+0000 mon.a (mon.0) 357 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 31/627 objects degraded (4.944%), 7 pgs degraded) 2026-03-09T14:40:31.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:31 vm07 bash[55244]: cluster 2026-03-09T14:40:31.049774+0000 mon.a (mon.0) 357 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 31/627 objects degraded (4.944%), 7 pgs degraded) 2026-03-09T14:40:31.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:31 vm07 bash[55244]: cluster 2026-03-09T14:40:31.049794+0000 mon.a (mon.0) 358 : cluster 
[INF] Cluster is now healthy 2026-03-09T14:40:31.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:31 vm07 bash[55244]: cluster 2026-03-09T14:40:31.049794+0000 mon.a (mon.0) 358 : cluster [INF] Cluster is now healthy 2026-03-09T14:40:31.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:31 vm11 bash[43577]: cluster 2026-03-09T14:40:31.049774+0000 mon.a (mon.0) 357 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 31/627 objects degraded (4.944%), 7 pgs degraded) 2026-03-09T14:40:31.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:31 vm11 bash[43577]: cluster 2026-03-09T14:40:31.049774+0000 mon.a (mon.0) 357 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 31/627 objects degraded (4.944%), 7 pgs degraded) 2026-03-09T14:40:31.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:31 vm11 bash[43577]: cluster 2026-03-09T14:40:31.049794+0000 mon.a (mon.0) 358 : cluster [INF] Cluster is now healthy 2026-03-09T14:40:31.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:31 vm11 bash[43577]: cluster 2026-03-09T14:40:31.049794+0000 mon.a (mon.0) 358 : cluster [INF] Cluster is now healthy 2026-03-09T14:40:32.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:32 vm07 bash[56315]: cluster 2026-03-09T14:40:30.544750+0000 mgr.y (mgr.44103) 144 : cluster [DBG] pgmap v72: 161 pgs: 161 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 735 B/s rd, 0 op/s 2026-03-09T14:40:32.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:32 vm07 bash[56315]: cluster 2026-03-09T14:40:30.544750+0000 mgr.y (mgr.44103) 144 : cluster [DBG] pgmap v72: 161 pgs: 161 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 735 B/s rd, 0 op/s 2026-03-09T14:40:32.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:32 vm07 bash[55244]: cluster 2026-03-09T14:40:30.544750+0000 mgr.y (mgr.44103) 144 : cluster [DBG] pgmap v72: 161 pgs: 161 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 735 B/s rd, 0 op/s 2026-03-09T14:40:32.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:32 vm07 bash[55244]: cluster 2026-03-09T14:40:30.544750+0000 mgr.y (mgr.44103) 144 : cluster [DBG] pgmap v72: 161 pgs: 161 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 735 B/s rd, 0 op/s 2026-03-09T14:40:32.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:32 vm11 bash[43577]: cluster 2026-03-09T14:40:30.544750+0000 mgr.y (mgr.44103) 144 : cluster [DBG] pgmap v72: 161 pgs: 161 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 735 B/s rd, 0 op/s 2026-03-09T14:40:32.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:32 vm11 bash[43577]: cluster 2026-03-09T14:40:30.544750+0000 mgr.y (mgr.44103) 144 : cluster [DBG] pgmap v72: 161 pgs: 161 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 735 B/s rd, 0 op/s 2026-03-09T14:40:33.654 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:40:33 vm07 bash[52213]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:40:33] "GET /metrics HTTP/1.1" 200 37980 "" "Prometheus/2.51.0" 2026-03-09T14:40:34.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:33 vm11 bash[43577]: cluster 2026-03-09T14:40:32.545162+0000 mgr.y (mgr.44103) 145 : cluster [DBG] pgmap v73: 161 pgs: 161 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T14:40:34.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:33 vm11 bash[43577]: cluster 2026-03-09T14:40:32.545162+0000 mgr.y (mgr.44103) 145 : cluster 
[DBG] pgmap v73: 161 pgs: 161 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T14:40:34.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:33 vm11 bash[43577]: audit 2026-03-09T14:40:32.614839+0000 mon.a (mon.0) 359 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:34.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:33 vm11 bash[43577]: audit 2026-03-09T14:40:32.614839+0000 mon.a (mon.0) 359 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:34.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:33 vm11 bash[43577]: audit 2026-03-09T14:40:33.482535+0000 mon.a (mon.0) 360 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:34.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:33 vm11 bash[43577]: audit 2026-03-09T14:40:33.482535+0000 mon.a (mon.0) 360 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:34.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:33 vm11 bash[43577]: audit 2026-03-09T14:40:33.525554+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:34.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:33 vm11 bash[43577]: audit 2026-03-09T14:40:33.525554+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:34.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:33 vm11 bash[43577]: audit 2026-03-09T14:40:33.526565+0000 mon.a (mon.0) 362 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:34.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:33 vm11 bash[43577]: audit 2026-03-09T14:40:33.526565+0000 mon.a (mon.0) 362 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:34.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:33 vm11 bash[43577]: audit 2026-03-09T14:40:33.527466+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:40:34.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:33 vm11 bash[43577]: audit 2026-03-09T14:40:33.527466+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:40:34.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:33 vm11 bash[43577]: audit 2026-03-09T14:40:33.562306+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:34.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:33 vm11 bash[43577]: audit 2026-03-09T14:40:33.562306+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:34.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:33 vm11 bash[43577]: audit 2026-03-09T14:40:33.603196+0000 mon.a (mon.0) 365 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:40:34.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:33 vm11 bash[43577]: audit 2026-03-09T14:40:33.603196+0000 mon.a (mon.0) 365 : audit [DBG] 
from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:40:34.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:33 vm11 bash[43577]: audit 2026-03-09T14:40:33.604544+0000 mon.a (mon.0) 366 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:34.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:33 vm11 bash[43577]: audit 2026-03-09T14:40:33.604544+0000 mon.a (mon.0) 366 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:34.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:33 vm11 bash[43577]: audit 2026-03-09T14:40:33.605542+0000 mon.a (mon.0) 367 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:34.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:33 vm11 bash[43577]: audit 2026-03-09T14:40:33.605542+0000 mon.a (mon.0) 367 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:34.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:33 vm11 bash[43577]: audit 2026-03-09T14:40:33.606410+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:34.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:33 vm11 bash[43577]: audit 2026-03-09T14:40:33.606410+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:34.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:33 vm11 bash[43577]: audit 2026-03-09T14:40:33.607338+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T14:40:34.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:33 vm11 bash[43577]: audit 2026-03-09T14:40:33.607338+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T14:40:34.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:33 vm07 bash[55244]: cluster 2026-03-09T14:40:32.545162+0000 mgr.y (mgr.44103) 145 : cluster [DBG] pgmap v73: 161 pgs: 161 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:33 vm07 bash[55244]: cluster 2026-03-09T14:40:32.545162+0000 mgr.y (mgr.44103) 145 : cluster [DBG] pgmap v73: 161 pgs: 161 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:33 vm07 bash[55244]: audit 2026-03-09T14:40:32.614839+0000 mon.a (mon.0) 359 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:33 vm07 bash[55244]: audit 2026-03-09T14:40:32.614839+0000 mon.a (mon.0) 359 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:33 vm07 bash[55244]: audit 2026-03-09T14:40:33.482535+0000 mon.a (mon.0) 360 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' 
entity='mgr.y' 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:33 vm07 bash[55244]: audit 2026-03-09T14:40:33.482535+0000 mon.a (mon.0) 360 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:33 vm07 bash[55244]: audit 2026-03-09T14:40:33.525554+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:33 vm07 bash[55244]: audit 2026-03-09T14:40:33.525554+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:33 vm07 bash[55244]: audit 2026-03-09T14:40:33.526565+0000 mon.a (mon.0) 362 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:33 vm07 bash[55244]: audit 2026-03-09T14:40:33.526565+0000 mon.a (mon.0) 362 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:33 vm07 bash[55244]: audit 2026-03-09T14:40:33.527466+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:33 vm07 bash[55244]: audit 2026-03-09T14:40:33.527466+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:33 vm07 bash[55244]: audit 2026-03-09T14:40:33.562306+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:33 vm07 bash[55244]: audit 2026-03-09T14:40:33.562306+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:33 vm07 bash[55244]: audit 2026-03-09T14:40:33.603196+0000 mon.a (mon.0) 365 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:33 vm07 bash[55244]: audit 2026-03-09T14:40:33.603196+0000 mon.a (mon.0) 365 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:33 vm07 bash[55244]: audit 2026-03-09T14:40:33.604544+0000 mon.a (mon.0) 366 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:33 vm07 bash[55244]: audit 2026-03-09T14:40:33.604544+0000 mon.a (mon.0) 366 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:33 vm07 bash[55244]: audit 2026-03-09T14:40:33.605542+0000 mon.a 
(mon.0) 367 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:33 vm07 bash[55244]: audit 2026-03-09T14:40:33.605542+0000 mon.a (mon.0) 367 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:33 vm07 bash[55244]: audit 2026-03-09T14:40:33.606410+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:33 vm07 bash[55244]: audit 2026-03-09T14:40:33.606410+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:33 vm07 bash[55244]: audit 2026-03-09T14:40:33.607338+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:33 vm07 bash[55244]: audit 2026-03-09T14:40:33.607338+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:33 vm07 bash[56315]: cluster 2026-03-09T14:40:32.545162+0000 mgr.y (mgr.44103) 145 : cluster [DBG] pgmap v73: 161 pgs: 161 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:33 vm07 bash[56315]: cluster 2026-03-09T14:40:32.545162+0000 mgr.y (mgr.44103) 145 : cluster [DBG] pgmap v73: 161 pgs: 161 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:33 vm07 bash[56315]: audit 2026-03-09T14:40:32.614839+0000 mon.a (mon.0) 359 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:33 vm07 bash[56315]: audit 2026-03-09T14:40:32.614839+0000 mon.a (mon.0) 359 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:33 vm07 bash[56315]: audit 2026-03-09T14:40:33.482535+0000 mon.a (mon.0) 360 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:33 vm07 bash[56315]: audit 2026-03-09T14:40:33.482535+0000 mon.a (mon.0) 360 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:33 vm07 bash[56315]: audit 2026-03-09T14:40:33.525554+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:33 vm07 bash[56315]: audit 2026-03-09T14:40:33.525554+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:33 vm07 bash[56315]: audit 
2026-03-09T14:40:33.526565+0000 mon.a (mon.0) 362 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:33 vm07 bash[56315]: audit 2026-03-09T14:40:33.526565+0000 mon.a (mon.0) 362 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:33 vm07 bash[56315]: audit 2026-03-09T14:40:33.527466+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:33 vm07 bash[56315]: audit 2026-03-09T14:40:33.527466+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:33 vm07 bash[56315]: audit 2026-03-09T14:40:33.562306+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:33 vm07 bash[56315]: audit 2026-03-09T14:40:33.562306+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:33 vm07 bash[56315]: audit 2026-03-09T14:40:33.603196+0000 mon.a (mon.0) 365 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:33 vm07 bash[56315]: audit 2026-03-09T14:40:33.603196+0000 mon.a (mon.0) 365 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:33 vm07 bash[56315]: audit 2026-03-09T14:40:33.604544+0000 mon.a (mon.0) 366 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:33 vm07 bash[56315]: audit 2026-03-09T14:40:33.604544+0000 mon.a (mon.0) 366 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:33 vm07 bash[56315]: audit 2026-03-09T14:40:33.605542+0000 mon.a (mon.0) 367 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:33 vm07 bash[56315]: audit 2026-03-09T14:40:33.605542+0000 mon.a (mon.0) 367 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:33 vm07 bash[56315]: audit 2026-03-09T14:40:33.606410+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:33 vm07 bash[56315]: audit 2026-03-09T14:40:33.606410+0000 mon.a 
(mon.0) 368 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:33 vm07 bash[56315]: audit 2026-03-09T14:40:33.607338+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T14:40:34.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:33 vm07 bash[56315]: audit 2026-03-09T14:40:33.607338+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T14:40:34.454 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:40:34 vm11 bash[41290]: ts=2026-03-09T14:40:34.146Z caller=alerting.go:391 level=warn component="rule manager" alert="unsupported value type" msg="Expanding alert template failed" err="error executing template __alert_CephOSDDown: template: __alert_CephOSDDown:1:358: executing \"__alert_CephOSDDown\" at : error calling query: found duplicate series for the match group {ceph_daemon=\"osd.1\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.1\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.1\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"192.168.123.111:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}];many-to-many matching not allowed: matching labels must be unique on one side" data="unsupported value type" 2026-03-09T14:40:34.454 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:40:34 vm11 bash[41290]: ts=2026-03-09T14:40:34.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.1\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.1\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.1\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"192.168.123.111:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:40:35.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:35 vm07 bash[55244]: audit 2026-03-09T14:40:33.607656+0000 mgr.y (mgr.44103) 146 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T14:40:35.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:35 vm07 bash[55244]: audit 2026-03-09T14:40:33.607656+0000 mgr.y (mgr.44103) 146 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T14:40:35.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:35 vm07 bash[55244]: cephadm 2026-03-09T14:40:33.608241+0000 mgr.y (mgr.44103) 147 : cephadm [INF] Upgrade: osd.4 is safe to restart 2026-03-09T14:40:35.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:35 vm07 bash[55244]: cephadm 2026-03-09T14:40:33.608241+0000 mgr.y (mgr.44103) 147 : cephadm [INF] Upgrade: osd.4 is safe to restart 2026-03-09T14:40:35.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:35 vm07 bash[55244]: cephadm 2026-03-09T14:40:34.069402+0000 mgr.y (mgr.44103) 148 : cephadm [INF] Upgrade: Updating osd.4 2026-03-09T14:40:35.655 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:35 vm07 bash[55244]: cephadm 2026-03-09T14:40:34.069402+0000 mgr.y (mgr.44103) 148 : cephadm [INF] Upgrade: Updating osd.4 2026-03-09T14:40:35.655 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:35 vm07 bash[55244]: audit 2026-03-09T14:40:34.170588+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:35.655 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:35 vm07 bash[55244]: audit 2026-03-09T14:40:34.170588+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:35.655 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:35 vm07 bash[55244]: audit 2026-03-09T14:40:34.172418+0000 mon.a (mon.0) 371 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T14:40:35.655 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:35 vm07 bash[55244]: audit 2026-03-09T14:40:34.172418+0000 mon.a (mon.0) 371 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T14:40:35.655 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:35 vm07 bash[55244]: audit 2026-03-09T14:40:34.172986+0000 mon.a (mon.0) 372 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:35.655 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:35 vm07 bash[55244]: audit 2026-03-09T14:40:34.172986+0000 mon.a (mon.0) 372 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:35.655 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:35 vm07 bash[55244]: cephadm 2026-03-09T14:40:34.174489+0000 mgr.y (mgr.44103) 149 : cephadm [INF] Deploying daemon osd.4 on vm11 2026-03-09T14:40:35.655 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:35 vm07 bash[55244]: cephadm 2026-03-09T14:40:34.174489+0000 mgr.y (mgr.44103) 149 : cephadm [INF] Deploying daemon osd.4 on vm11 2026-03-09T14:40:35.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:35 vm07 bash[56315]: audit 2026-03-09T14:40:33.607656+0000 mgr.y (mgr.44103) 146 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T14:40:35.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:35 vm07 bash[56315]: audit 2026-03-09T14:40:33.607656+0000 mgr.y (mgr.44103) 146 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T14:40:35.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:35 vm07 bash[56315]: cephadm 2026-03-09T14:40:33.608241+0000 mgr.y (mgr.44103) 147 : cephadm [INF] Upgrade: osd.4 is safe to restart 2026-03-09T14:40:35.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:35 vm07 bash[56315]: cephadm 2026-03-09T14:40:33.608241+0000 mgr.y (mgr.44103) 147 : cephadm [INF] Upgrade: osd.4 is safe to restart 2026-03-09T14:40:35.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:35 vm07 bash[56315]: cephadm 2026-03-09T14:40:34.069402+0000 mgr.y (mgr.44103) 148 : cephadm [INF] Upgrade: Updating osd.4 2026-03-09T14:40:35.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:35 vm07 bash[56315]: cephadm 2026-03-09T14:40:34.069402+0000 mgr.y (mgr.44103) 148 : cephadm [INF] Upgrade: Updating osd.4 2026-03-09T14:40:35.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:35 vm07 bash[56315]: audit 2026-03-09T14:40:34.170588+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:35.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:35 vm07 bash[56315]: audit 2026-03-09T14:40:34.170588+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:35.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:35 vm07 bash[56315]: audit 2026-03-09T14:40:34.172418+0000 mon.a (mon.0) 371 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T14:40:35.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:35 vm07 bash[56315]: audit 2026-03-09T14:40:34.172418+0000 mon.a (mon.0) 371 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T14:40:35.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:35 vm07 bash[56315]: audit 2026-03-09T14:40:34.172986+0000 mon.a (mon.0) 372 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' 
entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:35.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:35 vm07 bash[56315]: audit 2026-03-09T14:40:34.172986+0000 mon.a (mon.0) 372 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:35.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:35 vm07 bash[56315]: cephadm 2026-03-09T14:40:34.174489+0000 mgr.y (mgr.44103) 149 : cephadm [INF] Deploying daemon osd.4 on vm11 2026-03-09T14:40:35.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:35 vm07 bash[56315]: cephadm 2026-03-09T14:40:34.174489+0000 mgr.y (mgr.44103) 149 : cephadm [INF] Deploying daemon osd.4 on vm11 2026-03-09T14:40:35.681 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:40:35 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:35.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:40:35 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:35.681 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:40:35 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:35.681 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:40:35 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:35.681 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:40:35 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:35.681 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:40:35 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:40:35.682 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:40:35 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:35.682 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:40:35 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:35.682 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:40:35 vm11 systemd[1]: Stopping Ceph osd.4 for f59f9828-1bc3-11f1-bfd8-7b3d0c866040... 2026-03-09T14:40:35.682 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:35 vm11 bash[43577]: audit 2026-03-09T14:40:33.607656+0000 mgr.y (mgr.44103) 146 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T14:40:35.682 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:35 vm11 bash[43577]: audit 2026-03-09T14:40:33.607656+0000 mgr.y (mgr.44103) 146 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch 2026-03-09T14:40:35.682 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:35 vm11 bash[43577]: cephadm 2026-03-09T14:40:33.608241+0000 mgr.y (mgr.44103) 147 : cephadm [INF] Upgrade: osd.4 is safe to restart 2026-03-09T14:40:35.682 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:35 vm11 bash[43577]: cephadm 2026-03-09T14:40:33.608241+0000 mgr.y (mgr.44103) 147 : cephadm [INF] Upgrade: osd.4 is safe to restart 2026-03-09T14:40:35.682 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:35 vm11 bash[43577]: cephadm 2026-03-09T14:40:34.069402+0000 mgr.y (mgr.44103) 148 : cephadm [INF] Upgrade: Updating osd.4 2026-03-09T14:40:35.682 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:35 vm11 bash[43577]: cephadm 2026-03-09T14:40:34.069402+0000 mgr.y (mgr.44103) 148 : cephadm [INF] Upgrade: Updating osd.4 2026-03-09T14:40:35.682 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:35 vm11 bash[43577]: audit 2026-03-09T14:40:34.170588+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:35.682 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:35 vm11 bash[43577]: audit 2026-03-09T14:40:34.170588+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:35.682 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:35 vm11 bash[43577]: audit 2026-03-09T14:40:34.172418+0000 mon.a (mon.0) 371 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T14:40:35.682 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:35 vm11 bash[43577]: audit 2026-03-09T14:40:34.172418+0000 mon.a (mon.0) 371 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T14:40:35.682 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:35 vm11 bash[43577]: 
audit 2026-03-09T14:40:34.172986+0000 mon.a (mon.0) 372 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:35.682 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:35 vm11 bash[43577]: audit 2026-03-09T14:40:34.172986+0000 mon.a (mon.0) 372 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:35.682 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:35 vm11 bash[43577]: cephadm 2026-03-09T14:40:34.174489+0000 mgr.y (mgr.44103) 149 : cephadm [INF] Deploying daemon osd.4 on vm11 2026-03-09T14:40:35.682 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:35 vm11 bash[43577]: cephadm 2026-03-09T14:40:34.174489+0000 mgr.y (mgr.44103) 149 : cephadm [INF] Deploying daemon osd.4 on vm11 2026-03-09T14:40:35.682 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:35 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:36.003 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:40:35 vm11 bash[20835]: debug 2026-03-09T14:40:35.683+0000 7fbbcdb49700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.4 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T14:40:36.003 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:40:35 vm11 bash[20835]: debug 2026-03-09T14:40:35.683+0000 7fbbcdb49700 -1 osd.4 112 *** Got signal Terminated *** 2026-03-09T14:40:36.003 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:40:35 vm11 bash[20835]: debug 2026-03-09T14:40:35.683+0000 7fbbcdb49700 -1 osd.4 112 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T14:40:36.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:36 vm07 bash[55244]: cluster 2026-03-09T14:40:34.545655+0000 mgr.y (mgr.44103) 150 : cluster [DBG] pgmap v74: 161 pgs: 161 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T14:40:36.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:36 vm07 bash[55244]: cluster 2026-03-09T14:40:34.545655+0000 mgr.y (mgr.44103) 150 : cluster [DBG] pgmap v74: 161 pgs: 161 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T14:40:36.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:36 vm07 bash[55244]: cluster 2026-03-09T14:40:35.691213+0000 mon.a (mon.0) 373 : cluster [INF] osd.4 marked itself down and dead 2026-03-09T14:40:36.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:36 vm07 bash[55244]: cluster 2026-03-09T14:40:35.691213+0000 mon.a (mon.0) 373 : cluster [INF] osd.4 marked itself down and dead 2026-03-09T14:40:36.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:36 vm07 bash[56315]: cluster 2026-03-09T14:40:34.545655+0000 mgr.y (mgr.44103) 150 : cluster [DBG] pgmap v74: 161 pgs: 161 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T14:40:36.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:36 vm07 bash[56315]: cluster 2026-03-09T14:40:34.545655+0000 mgr.y (mgr.44103) 150 : cluster 
[DBG] pgmap v74: 161 pgs: 161 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T14:40:36.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:36 vm07 bash[56315]: cluster 2026-03-09T14:40:35.691213+0000 mon.a (mon.0) 373 : cluster [INF] osd.4 marked itself down and dead 2026-03-09T14:40:36.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:36 vm07 bash[56315]: cluster 2026-03-09T14:40:35.691213+0000 mon.a (mon.0) 373 : cluster [INF] osd.4 marked itself down and dead 2026-03-09T14:40:36.753 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:40:36 vm11 bash[45267]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-osd-4 2026-03-09T14:40:36.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:36 vm11 bash[43577]: cluster 2026-03-09T14:40:34.545655+0000 mgr.y (mgr.44103) 150 : cluster [DBG] pgmap v74: 161 pgs: 161 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T14:40:36.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:36 vm11 bash[43577]: cluster 2026-03-09T14:40:34.545655+0000 mgr.y (mgr.44103) 150 : cluster [DBG] pgmap v74: 161 pgs: 161 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T14:40:36.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:36 vm11 bash[43577]: cluster 2026-03-09T14:40:35.691213+0000 mon.a (mon.0) 373 : cluster [INF] osd.4 marked itself down and dead 2026-03-09T14:40:36.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:36 vm11 bash[43577]: cluster 2026-03-09T14:40:35.691213+0000 mon.a (mon.0) 373 : cluster [INF] osd.4 marked itself down and dead 2026-03-09T14:40:37.020 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:40:36 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:40:37.020 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:40:36 vm11 bash[41290]: ts=2026-03-09T14:40:36.947Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:40:37.020 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:40:36 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:37.021 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:36 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:37.021 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:40:36 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:37.021 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:40:36 vm11 systemd[1]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@osd.4.service: Deactivated successfully. 2026-03-09T14:40:37.021 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:40:36 vm11 systemd[1]: Stopped Ceph osd.4 for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:40:37.021 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:40:36 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:37.021 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:40:37 vm11 systemd[1]: Started Ceph osd.4 for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:40:37.021 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:40:36 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:37.021 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:40:36 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:37.021 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:40:36 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:37.021 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:40:36 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:40:37.437 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:37 vm11 bash[43577]: cluster 2026-03-09T14:40:36.402323+0000 mon.a (mon.0) 374 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:40:37.437 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:37 vm11 bash[43577]: cluster 2026-03-09T14:40:36.402323+0000 mon.a (mon.0) 374 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:40:37.437 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:40:37 vm11 bash[45470]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T14:40:37.437 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:40:37 vm11 bash[45470]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T14:40:37.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:37 vm11 bash[43577]: cluster 2026-03-09T14:40:36.438868+0000 mon.a (mon.0) 375 : cluster [DBG] osdmap e113: 8 total, 7 up, 8 in 2026-03-09T14:40:37.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:37 vm11 bash[43577]: cluster 2026-03-09T14:40:36.438868+0000 mon.a (mon.0) 375 : cluster [DBG] osdmap e113: 8 total, 7 up, 8 in 2026-03-09T14:40:37.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:37 vm11 bash[43577]: audit 2026-03-09T14:40:37.058487+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:37.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:37 vm11 bash[43577]: audit 2026-03-09T14:40:37.058487+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:37.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:37 vm11 bash[43577]: audit 2026-03-09T14:40:37.067308+0000 mon.a (mon.0) 377 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:37.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:37 vm11 bash[43577]: audit 2026-03-09T14:40:37.067308+0000 mon.a (mon.0) 377 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:37.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:37 vm07 bash[56315]: cluster 2026-03-09T14:40:36.402323+0000 mon.a (mon.0) 374 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:40:37.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:37 vm07 bash[56315]: cluster 2026-03-09T14:40:36.402323+0000 mon.a (mon.0) 374 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:40:37.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:37 vm07 bash[56315]: cluster 2026-03-09T14:40:36.438868+0000 mon.a (mon.0) 375 : cluster [DBG] osdmap e113: 8 total, 7 up, 8 in 2026-03-09T14:40:37.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:37 vm07 bash[56315]: cluster 2026-03-09T14:40:36.438868+0000 mon.a (mon.0) 375 : cluster [DBG] osdmap e113: 8 total, 7 up, 8 in 2026-03-09T14:40:37.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:37 vm07 bash[56315]: audit 2026-03-09T14:40:37.058487+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:37.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:37 vm07 bash[56315]: audit 2026-03-09T14:40:37.058487+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:37.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:37 vm07 bash[56315]: audit 2026-03-09T14:40:37.067308+0000 mon.a (mon.0) 377 : audit [INF] from='mgr.44103 
192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:37.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:37 vm07 bash[56315]: audit 2026-03-09T14:40:37.067308+0000 mon.a (mon.0) 377 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:37.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:37 vm07 bash[55244]: cluster 2026-03-09T14:40:36.402323+0000 mon.a (mon.0) 374 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:40:37.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:37 vm07 bash[55244]: cluster 2026-03-09T14:40:36.402323+0000 mon.a (mon.0) 374 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:40:37.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:37 vm07 bash[55244]: cluster 2026-03-09T14:40:36.438868+0000 mon.a (mon.0) 375 : cluster [DBG] osdmap e113: 8 total, 7 up, 8 in 2026-03-09T14:40:37.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:37 vm07 bash[55244]: cluster 2026-03-09T14:40:36.438868+0000 mon.a (mon.0) 375 : cluster [DBG] osdmap e113: 8 total, 7 up, 8 in 2026-03-09T14:40:37.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:37 vm07 bash[55244]: audit 2026-03-09T14:40:37.058487+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:37.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:37 vm07 bash[55244]: audit 2026-03-09T14:40:37.058487+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:37.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:37 vm07 bash[55244]: audit 2026-03-09T14:40:37.067308+0000 mon.a (mon.0) 377 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:37.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:37 vm07 bash[55244]: audit 2026-03-09T14:40:37.067308+0000 mon.a (mon.0) 377 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:38.419 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:40:38 vm11 bash[45470]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T14:40:38.419 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:40:38 vm11 bash[45470]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T14:40:38.419 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:40:38 vm11 bash[45470]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T14:40:38.419 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:40:38 vm11 bash[45470]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4 2026-03-09T14:40:38.419 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:40:38 vm11 bash[45470]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-f9db0d8d-37ec-489e-9f28-1aa248e81270/osd-block-8e6cc346-4281-49a1-9886-18c25e9addfc --path /var/lib/ceph/osd/ceph-4 --no-mon-config 2026-03-09T14:40:38.753 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:40:38 vm11 bash[45470]: Running command: /usr/bin/ln -snf /dev/ceph-f9db0d8d-37ec-489e-9f28-1aa248e81270/osd-block-8e6cc346-4281-49a1-9886-18c25e9addfc /var/lib/ceph/osd/ceph-4/block 2026-03-09T14:40:38.753 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:40:38 vm11 bash[45470]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-4/block 2026-03-09T14:40:38.753 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:40:38 vm11 bash[45470]: Running command: /usr/bin/chown -R ceph:ceph 
/dev/dm-0 2026-03-09T14:40:38.753 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:40:38 vm11 bash[45470]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4 2026-03-09T14:40:38.753 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:40:38 vm11 bash[45470]: --> ceph-volume lvm activate successful for osd ID: 4 2026-03-09T14:40:38.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:38 vm11 bash[43577]: cluster 2026-03-09T14:40:36.546128+0000 mgr.y (mgr.44103) 151 : cluster [DBG] pgmap v76: 161 pgs: 9 peering, 21 stale+active+clean, 131 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 204 B/s rd, 0 op/s 2026-03-09T14:40:38.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:38 vm11 bash[43577]: cluster 2026-03-09T14:40:36.546128+0000 mgr.y (mgr.44103) 151 : cluster [DBG] pgmap v76: 161 pgs: 9 peering, 21 stale+active+clean, 131 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 204 B/s rd, 0 op/s 2026-03-09T14:40:38.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:38 vm11 bash[43577]: cluster 2026-03-09T14:40:37.417747+0000 mon.a (mon.0) 378 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY) 2026-03-09T14:40:38.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:38 vm11 bash[43577]: cluster 2026-03-09T14:40:37.417747+0000 mon.a (mon.0) 378 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY) 2026-03-09T14:40:38.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:38 vm11 bash[43577]: cluster 2026-03-09T14:40:37.458892+0000 mon.a (mon.0) 379 : cluster [DBG] osdmap e114: 8 total, 7 up, 8 in 2026-03-09T14:40:38.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:38 vm11 bash[43577]: cluster 2026-03-09T14:40:37.458892+0000 mon.a (mon.0) 379 : cluster [DBG] osdmap e114: 8 total, 7 up, 8 in 2026-03-09T14:40:38.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:38 vm11 bash[43577]: audit 2026-03-09T14:40:37.578783+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:38.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:38 vm11 bash[43577]: audit 2026-03-09T14:40:37.578783+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:38.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:38 vm11 bash[43577]: audit 2026-03-09T14:40:37.579583+0000 mon.a (mon.0) 381 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:40:38.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:38 vm11 bash[43577]: audit 2026-03-09T14:40:37.579583+0000 mon.a (mon.0) 381 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:40:38.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:38 vm07 bash[56315]: cluster 2026-03-09T14:40:36.546128+0000 mgr.y (mgr.44103) 151 : cluster [DBG] pgmap v76: 161 pgs: 9 peering, 21 stale+active+clean, 131 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 204 B/s rd, 0 op/s 2026-03-09T14:40:38.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:38 vm07 bash[56315]: cluster 2026-03-09T14:40:36.546128+0000 mgr.y (mgr.44103) 151 : cluster [DBG] pgmap v76: 161 pgs: 9 peering, 21 stale+active+clean, 131 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 204 B/s rd, 0 op/s 
2026-03-09T14:40:38.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:38 vm07 bash[56315]: cluster 2026-03-09T14:40:37.417747+0000 mon.a (mon.0) 378 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY) 2026-03-09T14:40:38.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:38 vm07 bash[56315]: cluster 2026-03-09T14:40:37.417747+0000 mon.a (mon.0) 378 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY) 2026-03-09T14:40:38.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:38 vm07 bash[56315]: cluster 2026-03-09T14:40:37.458892+0000 mon.a (mon.0) 379 : cluster [DBG] osdmap e114: 8 total, 7 up, 8 in 2026-03-09T14:40:38.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:38 vm07 bash[56315]: cluster 2026-03-09T14:40:37.458892+0000 mon.a (mon.0) 379 : cluster [DBG] osdmap e114: 8 total, 7 up, 8 in 2026-03-09T14:40:38.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:38 vm07 bash[56315]: audit 2026-03-09T14:40:37.578783+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:38.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:38 vm07 bash[56315]: audit 2026-03-09T14:40:37.578783+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:38.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:38 vm07 bash[56315]: audit 2026-03-09T14:40:37.579583+0000 mon.a (mon.0) 381 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:40:38.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:38 vm07 bash[56315]: audit 2026-03-09T14:40:37.579583+0000 mon.a (mon.0) 381 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:40:38.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:38 vm07 bash[55244]: cluster 2026-03-09T14:40:36.546128+0000 mgr.y (mgr.44103) 151 : cluster [DBG] pgmap v76: 161 pgs: 9 peering, 21 stale+active+clean, 131 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 204 B/s rd, 0 op/s 2026-03-09T14:40:38.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:38 vm07 bash[55244]: cluster 2026-03-09T14:40:36.546128+0000 mgr.y (mgr.44103) 151 : cluster [DBG] pgmap v76: 161 pgs: 9 peering, 21 stale+active+clean, 131 active+clean; 457 KiB data, 182 MiB used, 160 GiB / 160 GiB avail; 204 B/s rd, 0 op/s 2026-03-09T14:40:38.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:38 vm07 bash[55244]: cluster 2026-03-09T14:40:37.417747+0000 mon.a (mon.0) 378 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY) 2026-03-09T14:40:38.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:38 vm07 bash[55244]: cluster 2026-03-09T14:40:37.417747+0000 mon.a (mon.0) 378 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY) 2026-03-09T14:40:38.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:38 vm07 bash[55244]: cluster 2026-03-09T14:40:37.458892+0000 mon.a (mon.0) 379 : cluster [DBG] osdmap e114: 8 total, 7 up, 8 in 2026-03-09T14:40:38.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:38 vm07 bash[55244]: cluster 2026-03-09T14:40:37.458892+0000 mon.a (mon.0) 379 : cluster [DBG] osdmap e114: 8 total, 7 up, 8 in 2026-03-09T14:40:38.905 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:38 vm07 bash[55244]: audit 2026-03-09T14:40:37.578783+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:38.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:38 vm07 bash[55244]: audit 2026-03-09T14:40:37.578783+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:38.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:38 vm07 bash[55244]: audit 2026-03-09T14:40:37.579583+0000 mon.a (mon.0) 381 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:40:38.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:38 vm07 bash[55244]: audit 2026-03-09T14:40:37.579583+0000 mon.a (mon.0) 381 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:40:39.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:39 vm07 bash[56315]: audit 2026-03-09T14:40:37.530167+0000 mgr.y (mgr.44103) 152 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:40:39.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:39 vm07 bash[56315]: audit 2026-03-09T14:40:37.530167+0000 mgr.y (mgr.44103) 152 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:40:39.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:39 vm07 bash[55244]: audit 2026-03-09T14:40:37.530167+0000 mgr.y (mgr.44103) 152 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:40:39.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:39 vm07 bash[55244]: audit 2026-03-09T14:40:37.530167+0000 mgr.y (mgr.44103) 152 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:40:40.003 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:40:39 vm11 bash[45819]: debug 2026-03-09T14:40:39.547+0000 7fd77d25d740 -1 Falling back to public interface 2026-03-09T14:40:40.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:39 vm11 bash[43577]: audit 2026-03-09T14:40:37.530167+0000 mgr.y (mgr.44103) 152 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:40:40.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:39 vm11 bash[43577]: audit 2026-03-09T14:40:37.530167+0000 mgr.y (mgr.44103) 152 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:40:40.753 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:40:40 vm11 bash[45819]: debug 2026-03-09T14:40:40.495+0000 7fd77d25d740 -1 osd.4 0 read_superblock omap replica is missing. 
2026-03-09T14:40:40.753 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:40:40 vm11 bash[45819]: debug 2026-03-09T14:40:40.507+0000 7fd77d25d740 -1 osd.4 112 log_to_monitors true 2026-03-09T14:40:40.753 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:40:40 vm11 bash[45819]: debug 2026-03-09T14:40:40.647+0000 7fd775008640 -1 osd.4 112 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T14:40:40.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:40 vm11 bash[43577]: cluster 2026-03-09T14:40:38.546634+0000 mgr.y (mgr.44103) 153 : cluster [DBG] pgmap v78: 161 pgs: 28 active+undersized, 9 peering, 6 stale+active+clean, 14 active+undersized+degraded, 104 active+clean; 457 KiB data, 183 MiB used, 160 GiB / 160 GiB avail; 61/627 objects degraded (9.729%) 2026-03-09T14:40:40.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:40 vm11 bash[43577]: cluster 2026-03-09T14:40:38.546634+0000 mgr.y (mgr.44103) 153 : cluster [DBG] pgmap v78: 161 pgs: 28 active+undersized, 9 peering, 6 stale+active+clean, 14 active+undersized+degraded, 104 active+clean; 457 KiB data, 183 MiB used, 160 GiB / 160 GiB avail; 61/627 objects degraded (9.729%) 2026-03-09T14:40:40.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:40 vm11 bash[43577]: cluster 2026-03-09T14:40:39.448102+0000 mon.a (mon.0) 382 : cluster [WRN] Health check failed: Degraded data redundancy: 61/627 objects degraded (9.729%), 14 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:40.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:40 vm11 bash[43577]: cluster 2026-03-09T14:40:39.448102+0000 mon.a (mon.0) 382 : cluster [WRN] Health check failed: Degraded data redundancy: 61/627 objects degraded (9.729%), 14 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:40.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:40 vm11 bash[43577]: audit 2026-03-09T14:40:40.518461+0000 mon.c (mon.1) 17 : audit [INF] from='osd.4 [v2:192.168.123.111:6800/1511596426,v1:192.168.123.111:6801/1511596426]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:40:40.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:40 vm11 bash[43577]: audit 2026-03-09T14:40:40.518461+0000 mon.c (mon.1) 17 : audit [INF] from='osd.4 [v2:192.168.123.111:6800/1511596426,v1:192.168.123.111:6801/1511596426]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:40:40.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:40 vm11 bash[43577]: audit 2026-03-09T14:40:40.518882+0000 mon.a (mon.0) 383 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:40:40.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:40 vm11 bash[43577]: audit 2026-03-09T14:40:40.518882+0000 mon.a (mon.0) 383 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:40:40.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:40 vm07 bash[56315]: cluster 2026-03-09T14:40:38.546634+0000 mgr.y (mgr.44103) 153 : cluster [DBG] pgmap v78: 161 pgs: 28 active+undersized, 9 peering, 6 stale+active+clean, 14 active+undersized+degraded, 104 active+clean; 457 KiB data, 183 MiB used, 160 GiB / 160 GiB avail; 61/627 objects degraded (9.729%) 2026-03-09T14:40:40.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:40 vm07 bash[56315]: cluster 
2026-03-09T14:40:38.546634+0000 mgr.y (mgr.44103) 153 : cluster [DBG] pgmap v78: 161 pgs: 28 active+undersized, 9 peering, 6 stale+active+clean, 14 active+undersized+degraded, 104 active+clean; 457 KiB data, 183 MiB used, 160 GiB / 160 GiB avail; 61/627 objects degraded (9.729%) 2026-03-09T14:40:40.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:40 vm07 bash[56315]: cluster 2026-03-09T14:40:39.448102+0000 mon.a (mon.0) 382 : cluster [WRN] Health check failed: Degraded data redundancy: 61/627 objects degraded (9.729%), 14 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:40.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:40 vm07 bash[56315]: cluster 2026-03-09T14:40:39.448102+0000 mon.a (mon.0) 382 : cluster [WRN] Health check failed: Degraded data redundancy: 61/627 objects degraded (9.729%), 14 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:40.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:40 vm07 bash[56315]: audit 2026-03-09T14:40:40.518461+0000 mon.c (mon.1) 17 : audit [INF] from='osd.4 [v2:192.168.123.111:6800/1511596426,v1:192.168.123.111:6801/1511596426]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:40:40.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:40 vm07 bash[56315]: audit 2026-03-09T14:40:40.518461+0000 mon.c (mon.1) 17 : audit [INF] from='osd.4 [v2:192.168.123.111:6800/1511596426,v1:192.168.123.111:6801/1511596426]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:40:40.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:40 vm07 bash[56315]: audit 2026-03-09T14:40:40.518882+0000 mon.a (mon.0) 383 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:40:40.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:40 vm07 bash[56315]: audit 2026-03-09T14:40:40.518882+0000 mon.a (mon.0) 383 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:40:40.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:40 vm07 bash[55244]: cluster 2026-03-09T14:40:38.546634+0000 mgr.y (mgr.44103) 153 : cluster [DBG] pgmap v78: 161 pgs: 28 active+undersized, 9 peering, 6 stale+active+clean, 14 active+undersized+degraded, 104 active+clean; 457 KiB data, 183 MiB used, 160 GiB / 160 GiB avail; 61/627 objects degraded (9.729%) 2026-03-09T14:40:40.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:40 vm07 bash[55244]: cluster 2026-03-09T14:40:38.546634+0000 mgr.y (mgr.44103) 153 : cluster [DBG] pgmap v78: 161 pgs: 28 active+undersized, 9 peering, 6 stale+active+clean, 14 active+undersized+degraded, 104 active+clean; 457 KiB data, 183 MiB used, 160 GiB / 160 GiB avail; 61/627 objects degraded (9.729%) 2026-03-09T14:40:40.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:40 vm07 bash[55244]: cluster 2026-03-09T14:40:39.448102+0000 mon.a (mon.0) 382 : cluster [WRN] Health check failed: Degraded data redundancy: 61/627 objects degraded (9.729%), 14 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:40.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:40 vm07 bash[55244]: cluster 2026-03-09T14:40:39.448102+0000 mon.a (mon.0) 382 : cluster [WRN] Health check failed: Degraded data redundancy: 61/627 objects degraded (9.729%), 14 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:40.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:40 vm07 bash[55244]: audit 
2026-03-09T14:40:40.518461+0000 mon.c (mon.1) 17 : audit [INF] from='osd.4 [v2:192.168.123.111:6800/1511596426,v1:192.168.123.111:6801/1511596426]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:40:40.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:40 vm07 bash[55244]: audit 2026-03-09T14:40:40.518461+0000 mon.c (mon.1) 17 : audit [INF] from='osd.4 [v2:192.168.123.111:6800/1511596426,v1:192.168.123.111:6801/1511596426]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:40:40.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:40 vm07 bash[55244]: audit 2026-03-09T14:40:40.518882+0000 mon.a (mon.0) 383 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:40:40.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:40 vm07 bash[55244]: audit 2026-03-09T14:40:40.518882+0000 mon.a (mon.0) 383 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:40:41.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:41 vm07 bash[56315]: cluster 2026-03-09T14:40:40.547068+0000 mgr.y (mgr.44103) 154 : cluster [DBG] pgmap v79: 161 pgs: 35 active+undersized, 9 peering, 3 stale+active+clean, 18 active+undersized+degraded, 96 active+clean; 457 KiB data, 183 MiB used, 160 GiB / 160 GiB avail; 74/627 objects degraded (11.802%) 2026-03-09T14:40:41.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:41 vm07 bash[56315]: cluster 2026-03-09T14:40:40.547068+0000 mgr.y (mgr.44103) 154 : cluster [DBG] pgmap v79: 161 pgs: 35 active+undersized, 9 peering, 3 stale+active+clean, 18 active+undersized+degraded, 96 active+clean; 457 KiB data, 183 MiB used, 160 GiB / 160 GiB avail; 74/627 objects degraded (11.802%) 2026-03-09T14:40:41.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:41 vm07 bash[56315]: audit 2026-03-09T14:40:40.618175+0000 mon.a (mon.0) 384 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T14:40:41.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:41 vm07 bash[56315]: audit 2026-03-09T14:40:40.618175+0000 mon.a (mon.0) 384 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T14:40:41.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:41 vm07 bash[56315]: cluster 2026-03-09T14:40:40.625035+0000 mon.a (mon.0) 385 : cluster [DBG] osdmap e115: 8 total, 7 up, 8 in 2026-03-09T14:40:41.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:41 vm07 bash[56315]: cluster 2026-03-09T14:40:40.625035+0000 mon.a (mon.0) 385 : cluster [DBG] osdmap e115: 8 total, 7 up, 8 in 2026-03-09T14:40:41.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:41 vm07 bash[56315]: audit 2026-03-09T14:40:40.627211+0000 mon.c (mon.1) 18 : audit [INF] from='osd.4 [v2:192.168.123.111:6800/1511596426,v1:192.168.123.111:6801/1511596426]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:40:41.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:41 vm07 bash[56315]: audit 2026-03-09T14:40:40.627211+0000 mon.c (mon.1) 18 : audit [INF] from='osd.4 [v2:192.168.123.111:6800/1511596426,v1:192.168.123.111:6801/1511596426]' 
entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:40:41.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:41 vm07 bash[56315]: audit 2026-03-09T14:40:40.627447+0000 mon.a (mon.0) 386 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:40:41.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:41 vm07 bash[56315]: audit 2026-03-09T14:40:40.627447+0000 mon.a (mon.0) 386 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:40:41.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:41 vm07 bash[55244]: cluster 2026-03-09T14:40:40.547068+0000 mgr.y (mgr.44103) 154 : cluster [DBG] pgmap v79: 161 pgs: 35 active+undersized, 9 peering, 3 stale+active+clean, 18 active+undersized+degraded, 96 active+clean; 457 KiB data, 183 MiB used, 160 GiB / 160 GiB avail; 74/627 objects degraded (11.802%) 2026-03-09T14:40:41.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:41 vm07 bash[55244]: cluster 2026-03-09T14:40:40.547068+0000 mgr.y (mgr.44103) 154 : cluster [DBG] pgmap v79: 161 pgs: 35 active+undersized, 9 peering, 3 stale+active+clean, 18 active+undersized+degraded, 96 active+clean; 457 KiB data, 183 MiB used, 160 GiB / 160 GiB avail; 74/627 objects degraded (11.802%) 2026-03-09T14:40:41.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:41 vm07 bash[55244]: audit 2026-03-09T14:40:40.618175+0000 mon.a (mon.0) 384 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T14:40:41.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:41 vm07 bash[55244]: audit 2026-03-09T14:40:40.618175+0000 mon.a (mon.0) 384 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T14:40:41.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:41 vm07 bash[55244]: cluster 2026-03-09T14:40:40.625035+0000 mon.a (mon.0) 385 : cluster [DBG] osdmap e115: 8 total, 7 up, 8 in 2026-03-09T14:40:41.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:41 vm07 bash[55244]: cluster 2026-03-09T14:40:40.625035+0000 mon.a (mon.0) 385 : cluster [DBG] osdmap e115: 8 total, 7 up, 8 in 2026-03-09T14:40:41.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:41 vm07 bash[55244]: audit 2026-03-09T14:40:40.627211+0000 mon.c (mon.1) 18 : audit [INF] from='osd.4 [v2:192.168.123.111:6800/1511596426,v1:192.168.123.111:6801/1511596426]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:40:41.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:41 vm07 bash[55244]: audit 2026-03-09T14:40:40.627211+0000 mon.c (mon.1) 18 : audit [INF] from='osd.4 [v2:192.168.123.111:6800/1511596426,v1:192.168.123.111:6801/1511596426]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:40:41.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:41 vm07 bash[55244]: audit 2026-03-09T14:40:40.627447+0000 mon.a (mon.0) 386 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, 
"args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:40:41.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:41 vm07 bash[55244]: audit 2026-03-09T14:40:40.627447+0000 mon.a (mon.0) 386 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:40:42.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:41 vm11 bash[43577]: cluster 2026-03-09T14:40:40.547068+0000 mgr.y (mgr.44103) 154 : cluster [DBG] pgmap v79: 161 pgs: 35 active+undersized, 9 peering, 3 stale+active+clean, 18 active+undersized+degraded, 96 active+clean; 457 KiB data, 183 MiB used, 160 GiB / 160 GiB avail; 74/627 objects degraded (11.802%) 2026-03-09T14:40:42.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:41 vm11 bash[43577]: cluster 2026-03-09T14:40:40.547068+0000 mgr.y (mgr.44103) 154 : cluster [DBG] pgmap v79: 161 pgs: 35 active+undersized, 9 peering, 3 stale+active+clean, 18 active+undersized+degraded, 96 active+clean; 457 KiB data, 183 MiB used, 160 GiB / 160 GiB avail; 74/627 objects degraded (11.802%) 2026-03-09T14:40:42.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:41 vm11 bash[43577]: audit 2026-03-09T14:40:40.618175+0000 mon.a (mon.0) 384 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T14:40:42.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:41 vm11 bash[43577]: audit 2026-03-09T14:40:40.618175+0000 mon.a (mon.0) 384 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T14:40:42.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:41 vm11 bash[43577]: cluster 2026-03-09T14:40:40.625035+0000 mon.a (mon.0) 385 : cluster [DBG] osdmap e115: 8 total, 7 up, 8 in 2026-03-09T14:40:42.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:41 vm11 bash[43577]: cluster 2026-03-09T14:40:40.625035+0000 mon.a (mon.0) 385 : cluster [DBG] osdmap e115: 8 total, 7 up, 8 in 2026-03-09T14:40:42.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:41 vm11 bash[43577]: audit 2026-03-09T14:40:40.627211+0000 mon.c (mon.1) 18 : audit [INF] from='osd.4 [v2:192.168.123.111:6800/1511596426,v1:192.168.123.111:6801/1511596426]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:40:42.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:41 vm11 bash[43577]: audit 2026-03-09T14:40:40.627211+0000 mon.c (mon.1) 18 : audit [INF] from='osd.4 [v2:192.168.123.111:6800/1511596426,v1:192.168.123.111:6801/1511596426]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:40:42.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:41 vm11 bash[43577]: audit 2026-03-09T14:40:40.627447+0000 mon.a (mon.0) 386 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:40:42.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:41 vm11 bash[43577]: audit 2026-03-09T14:40:40.627447+0000 mon.a (mon.0) 386 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:40:43.005 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:42 vm11 bash[43577]: cluster 2026-03-09T14:40:41.618418+0000 mon.a (mon.0) 387 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:40:43.005 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:42 vm11 bash[43577]: cluster 2026-03-09T14:40:41.618418+0000 mon.a (mon.0) 387 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:40:43.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:42 vm11 bash[43577]: cluster 2026-03-09T14:40:41.649767+0000 mon.a (mon.0) 388 : cluster [INF] osd.4 [v2:192.168.123.111:6800/1511596426,v1:192.168.123.111:6801/1511596426] boot 2026-03-09T14:40:43.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:42 vm11 bash[43577]: cluster 2026-03-09T14:40:41.649767+0000 mon.a (mon.0) 388 : cluster [INF] osd.4 [v2:192.168.123.111:6800/1511596426,v1:192.168.123.111:6801/1511596426] boot 2026-03-09T14:40:43.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:42 vm11 bash[43577]: cluster 2026-03-09T14:40:41.649804+0000 mon.a (mon.0) 389 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-09T14:40:43.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:42 vm11 bash[43577]: cluster 2026-03-09T14:40:41.649804+0000 mon.a (mon.0) 389 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-09T14:40:43.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:42 vm11 bash[43577]: audit 2026-03-09T14:40:41.650777+0000 mon.a (mon.0) 390 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:40:43.006 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:42 vm11 bash[43577]: audit 2026-03-09T14:40:41.650777+0000 mon.a (mon.0) 390 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:40:43.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:42 vm07 bash[56315]: cluster 2026-03-09T14:40:41.618418+0000 mon.a (mon.0) 387 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:40:43.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:42 vm07 bash[56315]: cluster 2026-03-09T14:40:41.618418+0000 mon.a (mon.0) 387 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:40:43.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:42 vm07 bash[56315]: cluster 2026-03-09T14:40:41.649767+0000 mon.a (mon.0) 388 : cluster [INF] osd.4 [v2:192.168.123.111:6800/1511596426,v1:192.168.123.111:6801/1511596426] boot 2026-03-09T14:40:43.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:42 vm07 bash[56315]: cluster 2026-03-09T14:40:41.649767+0000 mon.a (mon.0) 388 : cluster [INF] osd.4 [v2:192.168.123.111:6800/1511596426,v1:192.168.123.111:6801/1511596426] boot 2026-03-09T14:40:43.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:42 vm07 bash[56315]: cluster 2026-03-09T14:40:41.649804+0000 mon.a (mon.0) 389 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-09T14:40:43.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:42 vm07 bash[56315]: cluster 2026-03-09T14:40:41.649804+0000 mon.a (mon.0) 389 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-09T14:40:43.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:42 vm07 bash[56315]: audit 2026-03-09T14:40:41.650777+0000 mon.a (mon.0) 390 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:40:43.154 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:42 vm07 bash[56315]: audit 2026-03-09T14:40:41.650777+0000 mon.a (mon.0) 390 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:40:43.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:42 vm07 bash[55244]: cluster 2026-03-09T14:40:41.618418+0000 mon.a (mon.0) 387 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:40:43.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:42 vm07 bash[55244]: cluster 2026-03-09T14:40:41.618418+0000 mon.a (mon.0) 387 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:40:43.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:42 vm07 bash[55244]: cluster 2026-03-09T14:40:41.649767+0000 mon.a (mon.0) 388 : cluster [INF] osd.4 [v2:192.168.123.111:6800/1511596426,v1:192.168.123.111:6801/1511596426] boot 2026-03-09T14:40:43.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:42 vm07 bash[55244]: cluster 2026-03-09T14:40:41.649767+0000 mon.a (mon.0) 388 : cluster [INF] osd.4 [v2:192.168.123.111:6800/1511596426,v1:192.168.123.111:6801/1511596426] boot 2026-03-09T14:40:43.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:42 vm07 bash[55244]: cluster 2026-03-09T14:40:41.649804+0000 mon.a (mon.0) 389 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-09T14:40:43.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:42 vm07 bash[55244]: cluster 2026-03-09T14:40:41.649804+0000 mon.a (mon.0) 389 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-09T14:40:43.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:42 vm07 bash[55244]: audit 2026-03-09T14:40:41.650777+0000 mon.a (mon.0) 390 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:40:43.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:42 vm07 bash[55244]: audit 2026-03-09T14:40:41.650777+0000 mon.a (mon.0) 390 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:40:43.705 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:40:43 vm07 bash[52213]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:40:43] "GET /metrics HTTP/1.1" 200 38058 "" "Prometheus/2.51.0" 2026-03-09T14:40:44.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:43 vm11 bash[43577]: cluster 2026-03-09T14:40:42.547542+0000 mgr.y (mgr.44103) 155 : cluster [DBG] pgmap v82: 161 pgs: 43 active+undersized, 25 active+undersized+degraded, 93 active+clean; 457 KiB data, 201 MiB used, 160 GiB / 160 GiB avail; 105/627 objects degraded (16.746%) 2026-03-09T14:40:44.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:43 vm11 bash[43577]: cluster 2026-03-09T14:40:42.547542+0000 mgr.y (mgr.44103) 155 : cluster [DBG] pgmap v82: 161 pgs: 43 active+undersized, 25 active+undersized+degraded, 93 active+clean; 457 KiB data, 201 MiB used, 160 GiB / 160 GiB avail; 105/627 objects degraded (16.746%) 2026-03-09T14:40:44.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:43 vm11 bash[43577]: cluster 2026-03-09T14:40:42.686047+0000 mon.a (mon.0) 391 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 2 pgs peering) 2026-03-09T14:40:44.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:43 vm11 bash[43577]: cluster 2026-03-09T14:40:42.686047+0000 mon.a (mon.0) 391 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: 
Reduced data availability: 2 pgs peering) 2026-03-09T14:40:44.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:43 vm11 bash[43577]: cluster 2026-03-09T14:40:42.700389+0000 mon.a (mon.0) 392 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in 2026-03-09T14:40:44.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:43 vm11 bash[43577]: cluster 2026-03-09T14:40:42.700389+0000 mon.a (mon.0) 392 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in 2026-03-09T14:40:44.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:43 vm11 bash[43577]: audit 2026-03-09T14:40:43.493720+0000 mon.a (mon.0) 393 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:44.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:43 vm11 bash[43577]: audit 2026-03-09T14:40:43.493720+0000 mon.a (mon.0) 393 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:44.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:43 vm11 bash[43577]: audit 2026-03-09T14:40:43.498568+0000 mon.a (mon.0) 394 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:44.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:43 vm11 bash[43577]: audit 2026-03-09T14:40:43.498568+0000 mon.a (mon.0) 394 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:44.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:43 vm07 bash[55244]: cluster 2026-03-09T14:40:42.547542+0000 mgr.y (mgr.44103) 155 : cluster [DBG] pgmap v82: 161 pgs: 43 active+undersized, 25 active+undersized+degraded, 93 active+clean; 457 KiB data, 201 MiB used, 160 GiB / 160 GiB avail; 105/627 objects degraded (16.746%) 2026-03-09T14:40:44.166 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:43 vm07 bash[55244]: cluster 2026-03-09T14:40:42.547542+0000 mgr.y (mgr.44103) 155 : cluster [DBG] pgmap v82: 161 pgs: 43 active+undersized, 25 active+undersized+degraded, 93 active+clean; 457 KiB data, 201 MiB used, 160 GiB / 160 GiB avail; 105/627 objects degraded (16.746%) 2026-03-09T14:40:44.166 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:43 vm07 bash[55244]: cluster 2026-03-09T14:40:42.686047+0000 mon.a (mon.0) 391 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 2 pgs peering) 2026-03-09T14:40:44.166 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:43 vm07 bash[55244]: cluster 2026-03-09T14:40:42.686047+0000 mon.a (mon.0) 391 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 2 pgs peering) 2026-03-09T14:40:44.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:43 vm07 bash[55244]: cluster 2026-03-09T14:40:42.700389+0000 mon.a (mon.0) 392 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in 2026-03-09T14:40:44.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:43 vm07 bash[55244]: cluster 2026-03-09T14:40:42.700389+0000 mon.a (mon.0) 392 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in 2026-03-09T14:40:44.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:43 vm07 bash[55244]: audit 2026-03-09T14:40:43.493720+0000 mon.a (mon.0) 393 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:44.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:43 vm07 bash[55244]: audit 2026-03-09T14:40:43.493720+0000 mon.a (mon.0) 393 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:44.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:43 vm07 bash[55244]: audit 
2026-03-09T14:40:43.498568+0000 mon.a (mon.0) 394 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:44.167 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:43 vm07 bash[55244]: audit 2026-03-09T14:40:43.498568+0000 mon.a (mon.0) 394 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:44.167 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:43 vm07 bash[56315]: cluster 2026-03-09T14:40:42.547542+0000 mgr.y (mgr.44103) 155 : cluster [DBG] pgmap v82: 161 pgs: 43 active+undersized, 25 active+undersized+degraded, 93 active+clean; 457 KiB data, 201 MiB used, 160 GiB / 160 GiB avail; 105/627 objects degraded (16.746%) 2026-03-09T14:40:44.167 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:43 vm07 bash[56315]: cluster 2026-03-09T14:40:42.547542+0000 mgr.y (mgr.44103) 155 : cluster [DBG] pgmap v82: 161 pgs: 43 active+undersized, 25 active+undersized+degraded, 93 active+clean; 457 KiB data, 201 MiB used, 160 GiB / 160 GiB avail; 105/627 objects degraded (16.746%) 2026-03-09T14:40:44.167 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:43 vm07 bash[56315]: cluster 2026-03-09T14:40:42.686047+0000 mon.a (mon.0) 391 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 2 pgs peering) 2026-03-09T14:40:44.167 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:43 vm07 bash[56315]: cluster 2026-03-09T14:40:42.686047+0000 mon.a (mon.0) 391 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 2 pgs peering) 2026-03-09T14:40:44.167 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:43 vm07 bash[56315]: cluster 2026-03-09T14:40:42.700389+0000 mon.a (mon.0) 392 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in 2026-03-09T14:40:44.167 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:43 vm07 bash[56315]: cluster 2026-03-09T14:40:42.700389+0000 mon.a (mon.0) 392 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in 2026-03-09T14:40:44.167 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:43 vm07 bash[56315]: audit 2026-03-09T14:40:43.493720+0000 mon.a (mon.0) 393 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:44.167 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:43 vm07 bash[56315]: audit 2026-03-09T14:40:43.493720+0000 mon.a (mon.0) 393 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:44.167 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:43 vm07 bash[56315]: audit 2026-03-09T14:40:43.498568+0000 mon.a (mon.0) 394 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:44.167 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:43 vm07 bash[56315]: audit 2026-03-09T14:40:43.498568+0000 mon.a (mon.0) 394 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:44.503 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:40:44 vm11 bash[41290]: ts=2026-03-09T14:40:44.146Z caller=alerting.go:391 level=warn component="rule manager" alert="unsupported value type" msg="Expanding alert template failed" err="error executing template __alert_CephOSDDown: template: __alert_CephOSDDown:1:358: executing \"__alert_CephOSDDown\" at : error calling query: found duplicate series for the match group {ceph_daemon=\"osd.4\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.4\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy 
(stable)\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", cluster_addr=\"192.168.123.111\", device_class=\"hdd\", hostname=\"vm11\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.111\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.4\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.111\", device_class=\"hdd\", hostname=\"vm11\", instance=\"192.168.123.111:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.111\"}];many-to-many matching not allowed: matching labels must be unique on one side" data="unsupported value type" 2026-03-09T14:40:44.503 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:40:44 vm11 bash[41290]: ts=2026-03-09T14:40:44.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.4\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.4\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", cluster_addr=\"192.168.123.111\", device_class=\"hdd\", hostname=\"vm11\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.111\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.4\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.111\", device_class=\"hdd\", hostname=\"vm11\", instance=\"192.168.123.111:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.111\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:40:45.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:45 vm11 bash[43577]: audit 2026-03-09T14:40:44.255380+0000 mon.a (mon.0) 395 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:45.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:45 vm11 bash[43577]: audit 2026-03-09T14:40:44.255380+0000 mon.a (mon.0) 395 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:45.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:45 vm11 bash[43577]: audit 2026-03-09T14:40:44.318788+0000 mon.a (mon.0) 396 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:45.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:45 vm11 bash[43577]: audit 2026-03-09T14:40:44.318788+0000 mon.a (mon.0) 396 : audit [INF] from='mgr.44103 
192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:45.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:45 vm07 bash[56315]: audit 2026-03-09T14:40:44.255380+0000 mon.a (mon.0) 395 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:45.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:45 vm07 bash[56315]: audit 2026-03-09T14:40:44.255380+0000 mon.a (mon.0) 395 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:45.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:45 vm07 bash[56315]: audit 2026-03-09T14:40:44.318788+0000 mon.a (mon.0) 396 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:45.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:45 vm07 bash[56315]: audit 2026-03-09T14:40:44.318788+0000 mon.a (mon.0) 396 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:45.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:45 vm07 bash[55244]: audit 2026-03-09T14:40:44.255380+0000 mon.a (mon.0) 395 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:45.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:45 vm07 bash[55244]: audit 2026-03-09T14:40:44.255380+0000 mon.a (mon.0) 395 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:45.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:45 vm07 bash[55244]: audit 2026-03-09T14:40:44.318788+0000 mon.a (mon.0) 396 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:45.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:45 vm07 bash[55244]: audit 2026-03-09T14:40:44.318788+0000 mon.a (mon.0) 396 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:46.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:46 vm11 bash[43577]: cluster 2026-03-09T14:40:44.548090+0000 mgr.y (mgr.44103) 156 : cluster [DBG] pgmap v84: 161 pgs: 20 active+undersized, 13 active+undersized+degraded, 128 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 44/627 objects degraded (7.018%) 2026-03-09T14:40:46.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:46 vm11 bash[43577]: cluster 2026-03-09T14:40:44.548090+0000 mgr.y (mgr.44103) 156 : cluster [DBG] pgmap v84: 161 pgs: 20 active+undersized, 13 active+undersized+degraded, 128 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 44/627 objects degraded (7.018%) 2026-03-09T14:40:46.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:46 vm11 bash[43577]: cluster 2026-03-09T14:40:45.316283+0000 mon.a (mon.0) 397 : cluster [WRN] Health check update: Degraded data redundancy: 44/627 objects degraded (7.018%), 13 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:46.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:46 vm11 bash[43577]: cluster 2026-03-09T14:40:45.316283+0000 mon.a (mon.0) 397 : cluster [WRN] Health check update: Degraded data redundancy: 44/627 objects degraded (7.018%), 13 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:46.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:46 vm07 bash[55244]: cluster 2026-03-09T14:40:44.548090+0000 mgr.y (mgr.44103) 156 : cluster [DBG] pgmap v84: 161 pgs: 20 active+undersized, 13 active+undersized+degraded, 128 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 44/627 objects degraded (7.018%) 2026-03-09T14:40:46.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 
14:40:46 vm07 bash[55244]: cluster 2026-03-09T14:40:44.548090+0000 mgr.y (mgr.44103) 156 : cluster [DBG] pgmap v84: 161 pgs: 20 active+undersized, 13 active+undersized+degraded, 128 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 44/627 objects degraded (7.018%) 2026-03-09T14:40:46.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:46 vm07 bash[55244]: cluster 2026-03-09T14:40:45.316283+0000 mon.a (mon.0) 397 : cluster [WRN] Health check update: Degraded data redundancy: 44/627 objects degraded (7.018%), 13 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:46.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:46 vm07 bash[55244]: cluster 2026-03-09T14:40:45.316283+0000 mon.a (mon.0) 397 : cluster [WRN] Health check update: Degraded data redundancy: 44/627 objects degraded (7.018%), 13 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:46.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:46 vm07 bash[56315]: cluster 2026-03-09T14:40:44.548090+0000 mgr.y (mgr.44103) 156 : cluster [DBG] pgmap v84: 161 pgs: 20 active+undersized, 13 active+undersized+degraded, 128 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 44/627 objects degraded (7.018%) 2026-03-09T14:40:46.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:46 vm07 bash[56315]: cluster 2026-03-09T14:40:44.548090+0000 mgr.y (mgr.44103) 156 : cluster [DBG] pgmap v84: 161 pgs: 20 active+undersized, 13 active+undersized+degraded, 128 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 44/627 objects degraded (7.018%) 2026-03-09T14:40:46.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:46 vm07 bash[56315]: cluster 2026-03-09T14:40:45.316283+0000 mon.a (mon.0) 397 : cluster [WRN] Health check update: Degraded data redundancy: 44/627 objects degraded (7.018%), 13 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:46.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:46 vm07 bash[56315]: cluster 2026-03-09T14:40:45.316283+0000 mon.a (mon.0) 397 : cluster [WRN] Health check update: Degraded data redundancy: 44/627 objects degraded (7.018%), 13 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:47.253 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:40:46 vm11 bash[41290]: ts=2026-03-09T14:40:46.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 
2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:40:47.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:47 vm07 bash[55244]: cluster 2026-03-09T14:40:47.258771+0000 mon.a (mon.0) 398 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 44/627 objects degraded (7.018%), 13 pgs degraded) 2026-03-09T14:40:47.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:47 vm07 bash[55244]: cluster 2026-03-09T14:40:47.258771+0000 mon.a (mon.0) 398 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 44/627 objects degraded (7.018%), 13 pgs degraded) 2026-03-09T14:40:47.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:47 vm07 bash[55244]: cluster 2026-03-09T14:40:47.258791+0000 mon.a (mon.0) 399 : cluster [INF] Cluster is now healthy 2026-03-09T14:40:47.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:47 vm07 bash[55244]: cluster 2026-03-09T14:40:47.258791+0000 mon.a (mon.0) 399 : cluster [INF] Cluster is now healthy 2026-03-09T14:40:47.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:47 vm07 bash[56315]: cluster 2026-03-09T14:40:47.258771+0000 mon.a (mon.0) 398 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 44/627 objects degraded (7.018%), 13 pgs degraded) 2026-03-09T14:40:47.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:47 vm07 bash[56315]: cluster 2026-03-09T14:40:47.258771+0000 mon.a (mon.0) 398 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 44/627 objects degraded (7.018%), 13 pgs degraded) 2026-03-09T14:40:47.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:47 vm07 bash[56315]: cluster 2026-03-09T14:40:47.258791+0000 mon.a (mon.0) 399 : cluster [INF] Cluster is now healthy 2026-03-09T14:40:47.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:47 vm07 bash[56315]: cluster 2026-03-09T14:40:47.258791+0000 mon.a (mon.0) 399 : cluster [INF] Cluster is now healthy 2026-03-09T14:40:47.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:47 vm11 bash[43577]: cluster 2026-03-09T14:40:47.258771+0000 mon.a (mon.0) 398 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 44/627 objects degraded (7.018%), 13 pgs degraded) 2026-03-09T14:40:47.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:47 vm11 bash[43577]: cluster 2026-03-09T14:40:47.258771+0000 mon.a (mon.0) 398 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 44/627 objects degraded (7.018%), 13 pgs degraded) 2026-03-09T14:40:47.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:47 vm11 bash[43577]: cluster 2026-03-09T14:40:47.258791+0000 mon.a (mon.0) 399 : cluster [INF] Cluster is now healthy 2026-03-09T14:40:47.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:47 vm11 bash[43577]: cluster 2026-03-09T14:40:47.258791+0000 mon.a (mon.0) 399 : cluster [INF] Cluster is now healthy 2026-03-09T14:40:48.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:48 vm07 bash[56315]: cluster 2026-03-09T14:40:46.548436+0000 mgr.y (mgr.44103) 157 : cluster [DBG] pgmap v85: 161 pgs: 161 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-09T14:40:48.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:48 vm07 bash[56315]: cluster 2026-03-09T14:40:46.548436+0000 mgr.y (mgr.44103) 157 : cluster [DBG] pgmap v85: 161 pgs: 161 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 
2026-03-09T14:40:48.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:48 vm07 bash[55244]: cluster 2026-03-09T14:40:46.548436+0000 mgr.y (mgr.44103) 157 : cluster [DBG] pgmap v85: 161 pgs: 161 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-09T14:40:48.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:48 vm07 bash[55244]: cluster 2026-03-09T14:40:46.548436+0000 mgr.y (mgr.44103) 157 : cluster [DBG] pgmap v85: 161 pgs: 161 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-09T14:40:48.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:48 vm11 bash[43577]: cluster 2026-03-09T14:40:46.548436+0000 mgr.y (mgr.44103) 157 : cluster [DBG] pgmap v85: 161 pgs: 161 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-09T14:40:48.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:48 vm11 bash[43577]: cluster 2026-03-09T14:40:46.548436+0000 mgr.y (mgr.44103) 157 : cluster [DBG] pgmap v85: 161 pgs: 161 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-09T14:40:49.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:49 vm07 bash[55244]: audit 2026-03-09T14:40:47.533889+0000 mgr.y (mgr.44103) 158 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:40:49.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:49 vm07 bash[55244]: audit 2026-03-09T14:40:47.533889+0000 mgr.y (mgr.44103) 158 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:40:49.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:49 vm07 bash[56315]: audit 2026-03-09T14:40:47.533889+0000 mgr.y (mgr.44103) 158 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:40:49.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:49 vm07 bash[56315]: audit 2026-03-09T14:40:47.533889+0000 mgr.y (mgr.44103) 158 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:40:49.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:49 vm11 bash[43577]: audit 2026-03-09T14:40:47.533889+0000 mgr.y (mgr.44103) 158 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:40:49.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:49 vm11 bash[43577]: audit 2026-03-09T14:40:47.533889+0000 mgr.y (mgr.44103) 158 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:40:50.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:50 vm07 bash[55244]: cluster 2026-03-09T14:40:48.548789+0000 mgr.y (mgr.44103) 159 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 774 B/s rd, 0 op/s 2026-03-09T14:40:50.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:50 vm07 bash[55244]: cluster 2026-03-09T14:40:48.548789+0000 mgr.y (mgr.44103) 159 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 774 B/s rd, 0 op/s 2026-03-09T14:40:50.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:50 
vm07 bash[56315]: cluster 2026-03-09T14:40:48.548789+0000 mgr.y (mgr.44103) 159 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 774 B/s rd, 0 op/s 2026-03-09T14:40:50.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:50 vm07 bash[56315]: cluster 2026-03-09T14:40:48.548789+0000 mgr.y (mgr.44103) 159 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 774 B/s rd, 0 op/s 2026-03-09T14:40:50.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:50 vm11 bash[43577]: cluster 2026-03-09T14:40:48.548789+0000 mgr.y (mgr.44103) 159 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 774 B/s rd, 0 op/s 2026-03-09T14:40:50.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:50 vm11 bash[43577]: cluster 2026-03-09T14:40:48.548789+0000 mgr.y (mgr.44103) 159 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 774 B/s rd, 0 op/s 2026-03-09T14:40:52.124 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:52 vm11 bash[43577]: cluster 2026-03-09T14:40:50.549281+0000 mgr.y (mgr.44103) 160 : cluster [DBG] pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:40:52.125 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:52 vm11 bash[43577]: cluster 2026-03-09T14:40:50.549281+0000 mgr.y (mgr.44103) 160 : cluster [DBG] pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:40:52.125 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:52 vm11 bash[43577]: audit 2026-03-09T14:40:51.065341+0000 mon.a (mon.0) 400 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:52.125 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:52 vm11 bash[43577]: audit 2026-03-09T14:40:51.065341+0000 mon.a (mon.0) 400 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:52.125 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:52 vm11 bash[43577]: audit 2026-03-09T14:40:51.075195+0000 mon.a (mon.0) 401 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:52.125 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:52 vm11 bash[43577]: audit 2026-03-09T14:40:51.075195+0000 mon.a (mon.0) 401 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:52.125 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:52 vm11 bash[43577]: audit 2026-03-09T14:40:51.076520+0000 mon.a (mon.0) 402 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:52.125 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:52 vm11 bash[43577]: audit 2026-03-09T14:40:51.076520+0000 mon.a (mon.0) 402 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:52.125 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:52 vm11 bash[43577]: audit 2026-03-09T14:40:51.077309+0000 mon.a (mon.0) 403 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:40:52.125 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:52 vm11 bash[43577]: audit 2026-03-09T14:40:51.077309+0000 mon.a 
(mon.0) 403 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:40:52.125 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:52 vm11 bash[43577]: audit 2026-03-09T14:40:51.085243+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:52.125 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:52 vm11 bash[43577]: audit 2026-03-09T14:40:51.085243+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:52.125 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:52 vm11 bash[43577]: audit 2026-03-09T14:40:51.131406+0000 mon.a (mon.0) 405 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:40:52.125 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:52 vm11 bash[43577]: audit 2026-03-09T14:40:51.131406+0000 mon.a (mon.0) 405 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:40:52.125 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:52 vm11 bash[43577]: audit 2026-03-09T14:40:51.132938+0000 mon.a (mon.0) 406 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:52.125 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:52 vm11 bash[43577]: audit 2026-03-09T14:40:51.132938+0000 mon.a (mon.0) 406 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:52.125 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:52 vm11 bash[43577]: audit 2026-03-09T14:40:51.134138+0000 mon.a (mon.0) 407 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:52.125 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:52 vm11 bash[43577]: audit 2026-03-09T14:40:51.134138+0000 mon.a (mon.0) 407 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:52.125 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:52 vm11 bash[43577]: audit 2026-03-09T14:40:51.135085+0000 mon.a (mon.0) 408 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:52.125 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:52 vm11 bash[43577]: audit 2026-03-09T14:40:51.135085+0000 mon.a (mon.0) 408 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:52.125 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:52 vm11 bash[43577]: audit 2026-03-09T14:40:51.136305+0000 mon.a (mon.0) 409 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-09T14:40:52.125 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:52 vm11 bash[43577]: audit 2026-03-09T14:40:51.136305+0000 mon.a (mon.0) 409 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-09T14:40:52.125 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:52 vm11 bash[43577]: audit 2026-03-09T14:40:51.136483+0000 mgr.y (mgr.44103) 161 : audit [DBG] from='mon.0 -' 
entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-09T14:40:52.125 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:52 vm11 bash[43577]: audit 2026-03-09T14:40:51.136483+0000 mgr.y (mgr.44103) 161 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-09T14:40:52.125 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:52 vm11 bash[43577]: cephadm 2026-03-09T14:40:51.137239+0000 mgr.y (mgr.44103) 162 : cephadm [INF] Upgrade: osd.5 is safe to restart 2026-03-09T14:40:52.125 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:52 vm11 bash[43577]: cephadm 2026-03-09T14:40:51.137239+0000 mgr.y (mgr.44103) 162 : cephadm [INF] Upgrade: osd.5 is safe to restart 2026-03-09T14:40:52.125 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:52 vm11 bash[43577]: audit 2026-03-09T14:40:51.576221+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:52.125 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:52 vm11 bash[43577]: audit 2026-03-09T14:40:51.576221+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:52.125 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:52 vm11 bash[43577]: audit 2026-03-09T14:40:51.581061+0000 mon.a (mon.0) 411 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T14:40:52.125 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:52 vm11 bash[43577]: audit 2026-03-09T14:40:51.581061+0000 mon.a (mon.0) 411 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T14:40:52.125 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:52 vm11 bash[43577]: audit 2026-03-09T14:40:51.581573+0000 mon.a (mon.0) 412 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:52.125 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:52 vm11 bash[43577]: audit 2026-03-09T14:40:51.581573+0000 mon.a (mon.0) 412 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:52.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:52 vm07 bash[56315]: cluster 2026-03-09T14:40:50.549281+0000 mgr.y (mgr.44103) 160 : cluster [DBG] pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:40:52.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:52 vm07 bash[56315]: cluster 2026-03-09T14:40:50.549281+0000 mgr.y (mgr.44103) 160 : cluster [DBG] pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:40:52.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:52 vm07 bash[56315]: audit 2026-03-09T14:40:51.065341+0000 mon.a (mon.0) 400 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:52.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:52 vm07 bash[56315]: audit 2026-03-09T14:40:51.065341+0000 mon.a (mon.0) 400 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:52.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:52 vm07 bash[56315]: audit 2026-03-09T14:40:51.075195+0000 mon.a (mon.0) 401 : 
audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:52.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:52 vm07 bash[56315]: audit 2026-03-09T14:40:51.075195+0000 mon.a (mon.0) 401 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:52.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:52 vm07 bash[56315]: audit 2026-03-09T14:40:51.076520+0000 mon.a (mon.0) 402 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:52.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:52 vm07 bash[56315]: audit 2026-03-09T14:40:51.076520+0000 mon.a (mon.0) 402 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:52.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:52 vm07 bash[56315]: audit 2026-03-09T14:40:51.077309+0000 mon.a (mon.0) 403 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:40:52.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:52 vm07 bash[56315]: audit 2026-03-09T14:40:51.077309+0000 mon.a (mon.0) 403 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:40:52.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:52 vm07 bash[56315]: audit 2026-03-09T14:40:51.085243+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:52.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:52 vm07 bash[56315]: audit 2026-03-09T14:40:51.085243+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:52.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:52 vm07 bash[56315]: audit 2026-03-09T14:40:51.131406+0000 mon.a (mon.0) 405 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:40:52.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:52 vm07 bash[56315]: audit 2026-03-09T14:40:51.131406+0000 mon.a (mon.0) 405 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:52 vm07 bash[56315]: audit 2026-03-09T14:40:51.132938+0000 mon.a (mon.0) 406 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:52 vm07 bash[56315]: audit 2026-03-09T14:40:51.132938+0000 mon.a (mon.0) 406 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:52 vm07 bash[56315]: audit 2026-03-09T14:40:51.134138+0000 mon.a (mon.0) 407 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:52 vm07 bash[56315]: audit 2026-03-09T14:40:51.134138+0000 mon.a (mon.0) 407 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 
2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:52 vm07 bash[56315]: audit 2026-03-09T14:40:51.135085+0000 mon.a (mon.0) 408 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:52 vm07 bash[56315]: audit 2026-03-09T14:40:51.135085+0000 mon.a (mon.0) 408 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:52 vm07 bash[56315]: audit 2026-03-09T14:40:51.136305+0000 mon.a (mon.0) 409 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:52 vm07 bash[56315]: audit 2026-03-09T14:40:51.136305+0000 mon.a (mon.0) 409 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:52 vm07 bash[56315]: audit 2026-03-09T14:40:51.136483+0000 mgr.y (mgr.44103) 161 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:52 vm07 bash[56315]: audit 2026-03-09T14:40:51.136483+0000 mgr.y (mgr.44103) 161 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:52 vm07 bash[56315]: cephadm 2026-03-09T14:40:51.137239+0000 mgr.y (mgr.44103) 162 : cephadm [INF] Upgrade: osd.5 is safe to restart 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:52 vm07 bash[56315]: cephadm 2026-03-09T14:40:51.137239+0000 mgr.y (mgr.44103) 162 : cephadm [INF] Upgrade: osd.5 is safe to restart 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:52 vm07 bash[56315]: audit 2026-03-09T14:40:51.576221+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:52 vm07 bash[56315]: audit 2026-03-09T14:40:51.576221+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:52 vm07 bash[56315]: audit 2026-03-09T14:40:51.581061+0000 mon.a (mon.0) 411 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:52 vm07 bash[56315]: audit 2026-03-09T14:40:51.581061+0000 mon.a (mon.0) 411 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:52 vm07 bash[56315]: audit 2026-03-09T14:40:51.581573+0000 mon.a (mon.0) 412 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:52 vm07 bash[56315]: audit 2026-03-09T14:40:51.581573+0000 mon.a (mon.0) 412 : audit 
[DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:52 vm07 bash[55244]: cluster 2026-03-09T14:40:50.549281+0000 mgr.y (mgr.44103) 160 : cluster [DBG] pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:52 vm07 bash[55244]: cluster 2026-03-09T14:40:50.549281+0000 mgr.y (mgr.44103) 160 : cluster [DBG] pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:52 vm07 bash[55244]: audit 2026-03-09T14:40:51.065341+0000 mon.a (mon.0) 400 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:52 vm07 bash[55244]: audit 2026-03-09T14:40:51.065341+0000 mon.a (mon.0) 400 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:52 vm07 bash[55244]: audit 2026-03-09T14:40:51.075195+0000 mon.a (mon.0) 401 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:52 vm07 bash[55244]: audit 2026-03-09T14:40:51.075195+0000 mon.a (mon.0) 401 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:52 vm07 bash[55244]: audit 2026-03-09T14:40:51.076520+0000 mon.a (mon.0) 402 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:52 vm07 bash[55244]: audit 2026-03-09T14:40:51.076520+0000 mon.a (mon.0) 402 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:52 vm07 bash[55244]: audit 2026-03-09T14:40:51.077309+0000 mon.a (mon.0) 403 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:52 vm07 bash[55244]: audit 2026-03-09T14:40:51.077309+0000 mon.a (mon.0) 403 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:52 vm07 bash[55244]: audit 2026-03-09T14:40:51.085243+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:52 vm07 bash[55244]: audit 2026-03-09T14:40:51.085243+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:52 vm07 bash[55244]: audit 2026-03-09T14:40:51.131406+0000 mon.a (mon.0) 405 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:40:52.405 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:52 vm07 bash[55244]: audit 2026-03-09T14:40:51.131406+0000 mon.a (mon.0) 405 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:52 vm07 bash[55244]: audit 2026-03-09T14:40:51.132938+0000 mon.a (mon.0) 406 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:52 vm07 bash[55244]: audit 2026-03-09T14:40:51.132938+0000 mon.a (mon.0) 406 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:52 vm07 bash[55244]: audit 2026-03-09T14:40:51.134138+0000 mon.a (mon.0) 407 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:52 vm07 bash[55244]: audit 2026-03-09T14:40:51.134138+0000 mon.a (mon.0) 407 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:52 vm07 bash[55244]: audit 2026-03-09T14:40:51.135085+0000 mon.a (mon.0) 408 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:52 vm07 bash[55244]: audit 2026-03-09T14:40:51.135085+0000 mon.a (mon.0) 408 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:52 vm07 bash[55244]: audit 2026-03-09T14:40:51.136305+0000 mon.a (mon.0) 409 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:52 vm07 bash[55244]: audit 2026-03-09T14:40:51.136305+0000 mon.a (mon.0) 409 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:52 vm07 bash[55244]: audit 2026-03-09T14:40:51.136483+0000 mgr.y (mgr.44103) 161 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:52 vm07 bash[55244]: audit 2026-03-09T14:40:51.136483+0000 mgr.y (mgr.44103) 161 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:52 vm07 bash[55244]: cephadm 2026-03-09T14:40:51.137239+0000 mgr.y (mgr.44103) 162 : cephadm [INF] Upgrade: osd.5 is safe to restart 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:52 vm07 bash[55244]: cephadm 2026-03-09T14:40:51.137239+0000 mgr.y (mgr.44103) 162 : cephadm [INF] Upgrade: osd.5 is safe to restart 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:52 vm07 bash[55244]: audit 2026-03-09T14:40:51.576221+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:52 vm07 bash[55244]: audit 2026-03-09T14:40:51.576221+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:52 vm07 bash[55244]: audit 2026-03-09T14:40:51.581061+0000 mon.a (mon.0) 411 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:52 vm07 bash[55244]: audit 2026-03-09T14:40:51.581061+0000 mon.a (mon.0) 411 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:52 vm07 bash[55244]: audit 2026-03-09T14:40:51.581573+0000 mon.a (mon.0) 412 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:52.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:52 vm07 bash[55244]: audit 2026-03-09T14:40:51.581573+0000 mon.a (mon.0) 412 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:40:52.752 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:40:52 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:52.753 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:40:52 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:52.753 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:40:52 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:40:52.753 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:40:52 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:52.753 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:40:52 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:52.753 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:40:52 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:52.753 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:40:52 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:52.753 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:40:52 vm11 systemd[1]: Stopping Ceph osd.5 for f59f9828-1bc3-11f1-bfd8-7b3d0c866040... 2026-03-09T14:40:52.753 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:40:52 vm11 bash[23966]: debug 2026-03-09T14:40:52.631+0000 7f8d1028a700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.5 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T14:40:52.753 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:40:52 vm11 bash[23966]: debug 2026-03-09T14:40:52.631+0000 7f8d1028a700 -1 osd.5 117 *** Got signal Terminated *** 2026-03-09T14:40:52.753 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:40:52 vm11 bash[23966]: debug 2026-03-09T14:40:52.631+0000 7f8d1028a700 -1 osd.5 117 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T14:40:52.753 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:40:52 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:52.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:52 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:53.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:53 vm07 bash[55244]: cephadm 2026-03-09T14:40:51.571509+0000 mgr.y (mgr.44103) 163 : cephadm [INF] Upgrade: Updating osd.5 2026-03-09T14:40:53.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:53 vm07 bash[55244]: cephadm 2026-03-09T14:40:51.571509+0000 mgr.y (mgr.44103) 163 : cephadm [INF] Upgrade: Updating osd.5 2026-03-09T14:40:53.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:53 vm07 bash[55244]: cephadm 2026-03-09T14:40:51.582966+0000 mgr.y (mgr.44103) 164 : cephadm [INF] Deploying daemon osd.5 on vm11 2026-03-09T14:40:53.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:53 vm07 bash[55244]: cephadm 2026-03-09T14:40:51.582966+0000 mgr.y (mgr.44103) 164 : cephadm [INF] Deploying daemon osd.5 on vm11 2026-03-09T14:40:53.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:53 vm07 bash[55244]: audit 2026-03-09T14:40:52.582271+0000 mon.a (mon.0) 413 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:53.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:53 vm07 bash[55244]: audit 2026-03-09T14:40:52.582271+0000 mon.a (mon.0) 413 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:53.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:53 vm07 bash[55244]: audit 2026-03-09T14:40:52.583698+0000 mon.a (mon.0) 414 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:40:53.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:53 vm07 bash[55244]: audit 2026-03-09T14:40:52.583698+0000 mon.a (mon.0) 414 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:40:53.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:53 vm07 bash[55244]: audit 2026-03-09T14:40:52.623625+0000 mon.a (mon.0) 415 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:53.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:53 vm07 bash[55244]: audit 2026-03-09T14:40:52.623625+0000 mon.a (mon.0) 415 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:53.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:53 vm07 bash[55244]: cluster 2026-03-09T14:40:52.638911+0000 mon.a (mon.0) 416 : cluster [INF] osd.5 marked itself down and dead 2026-03-09T14:40:53.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:53 vm07 bash[55244]: cluster 2026-03-09T14:40:52.638911+0000 mon.a (mon.0) 416 : cluster [INF] osd.5 marked itself down and dead 2026-03-09T14:40:53.654 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:40:53 vm07 bash[52213]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:40:53] "GET /metrics HTTP/1.1" 200 38078 "" "Prometheus/2.51.0" 2026-03-09T14:40:53.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:53 vm07 bash[56315]: cephadm 2026-03-09T14:40:51.571509+0000 mgr.y (mgr.44103) 163 : cephadm [INF] Upgrade: Updating osd.5 2026-03-09T14:40:53.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:53 vm07 bash[56315]: cephadm 2026-03-09T14:40:51.571509+0000 mgr.y (mgr.44103) 163 : cephadm [INF] Upgrade: Updating osd.5 2026-03-09T14:40:53.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:53 
vm07 bash[56315]: cephadm 2026-03-09T14:40:51.582966+0000 mgr.y (mgr.44103) 164 : cephadm [INF] Deploying daemon osd.5 on vm11 2026-03-09T14:40:53.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:53 vm07 bash[56315]: cephadm 2026-03-09T14:40:51.582966+0000 mgr.y (mgr.44103) 164 : cephadm [INF] Deploying daemon osd.5 on vm11 2026-03-09T14:40:53.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:53 vm07 bash[56315]: audit 2026-03-09T14:40:52.582271+0000 mon.a (mon.0) 413 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:53.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:53 vm07 bash[56315]: audit 2026-03-09T14:40:52.582271+0000 mon.a (mon.0) 413 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:53.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:53 vm07 bash[56315]: audit 2026-03-09T14:40:52.583698+0000 mon.a (mon.0) 414 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:40:53.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:53 vm07 bash[56315]: audit 2026-03-09T14:40:52.583698+0000 mon.a (mon.0) 414 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:40:53.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:53 vm07 bash[56315]: audit 2026-03-09T14:40:52.623625+0000 mon.a (mon.0) 415 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:53.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:53 vm07 bash[56315]: audit 2026-03-09T14:40:52.623625+0000 mon.a (mon.0) 415 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:53.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:53 vm07 bash[56315]: cluster 2026-03-09T14:40:52.638911+0000 mon.a (mon.0) 416 : cluster [INF] osd.5 marked itself down and dead 2026-03-09T14:40:53.655 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:53 vm07 bash[56315]: cluster 2026-03-09T14:40:52.638911+0000 mon.a (mon.0) 416 : cluster [INF] osd.5 marked itself down and dead 2026-03-09T14:40:53.679 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:53 vm11 bash[43577]: cephadm 2026-03-09T14:40:51.571509+0000 mgr.y (mgr.44103) 163 : cephadm [INF] Upgrade: Updating osd.5 2026-03-09T14:40:53.679 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:53 vm11 bash[43577]: cephadm 2026-03-09T14:40:51.571509+0000 mgr.y (mgr.44103) 163 : cephadm [INF] Upgrade: Updating osd.5 2026-03-09T14:40:53.680 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:53 vm11 bash[43577]: cephadm 2026-03-09T14:40:51.582966+0000 mgr.y (mgr.44103) 164 : cephadm [INF] Deploying daemon osd.5 on vm11 2026-03-09T14:40:53.680 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:53 vm11 bash[43577]: cephadm 2026-03-09T14:40:51.582966+0000 mgr.y (mgr.44103) 164 : cephadm [INF] Deploying daemon osd.5 on vm11 2026-03-09T14:40:53.680 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:53 vm11 bash[43577]: audit 2026-03-09T14:40:52.582271+0000 mon.a (mon.0) 413 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:53.680 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:53 vm11 bash[43577]: audit 2026-03-09T14:40:52.582271+0000 mon.a (mon.0) 413 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:53.680 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:53 vm11 bash[43577]: audit 2026-03-09T14:40:52.583698+0000 mon.a (mon.0) 414 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:40:53.680 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:53 vm11 bash[43577]: audit 2026-03-09T14:40:52.583698+0000 mon.a (mon.0) 414 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:40:53.680 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:53 vm11 bash[43577]: audit 2026-03-09T14:40:52.623625+0000 mon.a (mon.0) 415 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:53.680 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:53 vm11 bash[43577]: audit 2026-03-09T14:40:52.623625+0000 mon.a (mon.0) 415 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:53.680 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:53 vm11 bash[43577]: cluster 2026-03-09T14:40:52.638911+0000 mon.a (mon.0) 416 : cluster [INF] osd.5 marked itself down and dead 2026-03-09T14:40:53.680 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:53 vm11 bash[43577]: cluster 2026-03-09T14:40:52.638911+0000 mon.a (mon.0) 416 : cluster [INF] osd.5 marked itself down and dead 2026-03-09T14:40:53.956 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:40:53 vm11 bash[47279]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-osd-5 2026-03-09T14:40:54.252 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:40:54 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:54.253 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:40:54 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:54.253 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:40:54 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
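The repeated systemd warnings above come from line 23 of the generated ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service template, which still sets KillMode=none; systemd itself suggests 'mixed' or 'control-group' instead. A hypothetical override on a test node, not applied in this run, could look like:

  # creates a drop-in that applies to every ceph-<fsid>@<daemon> instance on this host
  sudo systemctl edit ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service
  # in the editor that opens, add:
  #   [Service]
  #   KillMode=mixed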
2026-03-09T14:40:54.253 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:40:54 vm11 bash[41290]: ts=2026-03-09T14:40:54.146Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.5\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.5\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", cluster_addr=\"192.168.123.111\", device_class=\"hdd\", hostname=\"vm11\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.111\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.5\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.111\", device_class=\"hdd\", hostname=\"vm11\", instance=\"192.168.123.111:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.111\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:40:54.253 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:40:54 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:54.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:54 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:40:54.253 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:40:54 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
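The CephOSDFlapping rule fails to evaluate because Prometheus currently holds two ceph_osd_metadata series for osd.5, one scraped as instance="ceph_cluster" with a cluster label and one from the legacy 192.168.123.111:9283 target, so the on (ceph_daemon) join is no longer one-to-one. The duplicate series can be listed directly from the Prometheus HTTP API (port 9095 per the ceph orch ps output later in the log); curl and jq are assumed to be available on the querying host:

  curl -sG 'http://vm11:9095/api/v1/series' \
    --data-urlencode 'match[]=ceph_osd_metadata{ceph_daemon="osd.5"}' | jq '.data'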
2026-03-09T14:40:54.253 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:40:54 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:40:54.253 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:40:53 vm11 systemd[1]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@osd.5.service: Deactivated successfully.
2026-03-09T14:40:54.253 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:40:53 vm11 systemd[1]: Stopped Ceph osd.5 for f59f9828-1bc3-11f1-bfd8-7b3d0c866040.
2026-03-09T14:40:54.253 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:40:54 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-09T14:40:54.253 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:40:54 vm11 systemd[1]: Started Ceph osd.5 for f59f9828-1bc3-11f1-bfd8-7b3d0c866040.
2026-03-09T14:40:54.253 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:40:54 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
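At this point systemd has stopped the old quincy osd.5 container and started the unit again for the redeployed daemon. The same sequence can be followed live on vm11 with journalctl against the instance unit named in the entries above, for example:

  # tail the redeployed osd.5 unit on vm11
  journalctl -u ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@osd.5.service -n 50 -f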
2026-03-09T14:40:54.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:54 vm07 bash[55244]: cluster 2026-03-09T14:40:52.549664+0000 mgr.y (mgr.44103) 165 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T14:40:54.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:54 vm07 bash[55244]: cluster 2026-03-09T14:40:52.549664+0000 mgr.y (mgr.44103) 165 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T14:40:54.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:54 vm07 bash[55244]: cluster 2026-03-09T14:40:53.622485+0000 mon.a (mon.0) 417 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:40:54.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:54 vm07 bash[55244]: cluster 2026-03-09T14:40:53.622485+0000 mon.a (mon.0) 417 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:40:54.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:54 vm07 bash[55244]: cluster 2026-03-09T14:40:53.629263+0000 mon.a (mon.0) 418 : cluster [DBG] osdmap e118: 8 total, 7 up, 8 in 2026-03-09T14:40:54.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:54 vm07 bash[55244]: cluster 2026-03-09T14:40:53.629263+0000 mon.a (mon.0) 418 : cluster [DBG] osdmap e118: 8 total, 7 up, 8 in 2026-03-09T14:40:54.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:54 vm07 bash[55244]: audit 2026-03-09T14:40:54.224652+0000 mon.a (mon.0) 419 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:54.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:54 vm07 bash[55244]: audit 2026-03-09T14:40:54.224652+0000 mon.a (mon.0) 419 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:54.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:54 vm07 bash[55244]: audit 2026-03-09T14:40:54.231416+0000 mon.a (mon.0) 420 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:54.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:54 vm07 bash[55244]: audit 2026-03-09T14:40:54.231416+0000 mon.a (mon.0) 420 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:54.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:54 vm07 bash[56315]: cluster 2026-03-09T14:40:52.549664+0000 mgr.y (mgr.44103) 165 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T14:40:54.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:54 vm07 bash[56315]: cluster 2026-03-09T14:40:52.549664+0000 mgr.y (mgr.44103) 165 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T14:40:54.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:54 vm07 bash[56315]: cluster 2026-03-09T14:40:53.622485+0000 mon.a (mon.0) 417 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:40:54.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:54 vm07 bash[56315]: cluster 2026-03-09T14:40:53.622485+0000 mon.a (mon.0) 417 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:40:54.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:54 vm07 bash[56315]: cluster 2026-03-09T14:40:53.629263+0000 mon.a (mon.0) 418 : cluster [DBG] osdmap e118: 8 total, 7 up, 8 in 2026-03-09T14:40:54.654 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:54 vm07 bash[56315]: cluster 2026-03-09T14:40:53.629263+0000 mon.a (mon.0) 418 : cluster [DBG] osdmap e118: 8 total, 7 up, 8 in 2026-03-09T14:40:54.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:54 vm07 bash[56315]: audit 2026-03-09T14:40:54.224652+0000 mon.a (mon.0) 419 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:54.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:54 vm07 bash[56315]: audit 2026-03-09T14:40:54.224652+0000 mon.a (mon.0) 419 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:54.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:54 vm07 bash[56315]: audit 2026-03-09T14:40:54.231416+0000 mon.a (mon.0) 420 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:54.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:54 vm07 bash[56315]: audit 2026-03-09T14:40:54.231416+0000 mon.a (mon.0) 420 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:54.753 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:40:54 vm11 bash[47489]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T14:40:54.753 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:40:54 vm11 bash[47489]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T14:40:54.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:54 vm11 bash[43577]: cluster 2026-03-09T14:40:52.549664+0000 mgr.y (mgr.44103) 165 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T14:40:54.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:54 vm11 bash[43577]: cluster 2026-03-09T14:40:52.549664+0000 mgr.y (mgr.44103) 165 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T14:40:54.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:54 vm11 bash[43577]: cluster 2026-03-09T14:40:53.622485+0000 mon.a (mon.0) 417 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:40:54.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:54 vm11 bash[43577]: cluster 2026-03-09T14:40:53.622485+0000 mon.a (mon.0) 417 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:40:54.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:54 vm11 bash[43577]: cluster 2026-03-09T14:40:53.629263+0000 mon.a (mon.0) 418 : cluster [DBG] osdmap e118: 8 total, 7 up, 8 in 2026-03-09T14:40:54.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:54 vm11 bash[43577]: cluster 2026-03-09T14:40:53.629263+0000 mon.a (mon.0) 418 : cluster [DBG] osdmap e118: 8 total, 7 up, 8 in 2026-03-09T14:40:54.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:54 vm11 bash[43577]: audit 2026-03-09T14:40:54.224652+0000 mon.a (mon.0) 419 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:54.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:54 vm11 bash[43577]: audit 2026-03-09T14:40:54.224652+0000 mon.a (mon.0) 419 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:54.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:54 vm11 bash[43577]: audit 2026-03-09T14:40:54.231416+0000 mon.a (mon.0) 420 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:54.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 
14:40:54 vm11 bash[43577]: audit 2026-03-09T14:40:54.231416+0000 mon.a (mon.0) 420 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:40:55.503 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:40:55 vm11 bash[47489]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T14:40:55.503 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:40:55 vm11 bash[47489]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T14:40:55.503 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:40:55 vm11 bash[47489]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T14:40:55.503 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:40:55 vm11 bash[47489]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5 2026-03-09T14:40:55.503 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:40:55 vm11 bash[47489]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-6890b6d3-04e1-4427-8994-87bd041edf34/osd-block-104be397-ca1c-4a2d-ae2d-97efa37d095a --path /var/lib/ceph/osd/ceph-5 --no-mon-config 2026-03-09T14:40:55.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:55 vm07 bash[55244]: cluster 2026-03-09T14:40:54.549943+0000 mgr.y (mgr.44103) 166 : cluster [DBG] pgmap v90: 161 pgs: 22 stale+active+clean, 139 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T14:40:55.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:55 vm07 bash[55244]: cluster 2026-03-09T14:40:54.549943+0000 mgr.y (mgr.44103) 166 : cluster [DBG] pgmap v90: 161 pgs: 22 stale+active+clean, 139 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T14:40:55.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:55 vm07 bash[55244]: cluster 2026-03-09T14:40:54.647261+0000 mon.a (mon.0) 421 : cluster [DBG] osdmap e119: 8 total, 7 up, 8 in 2026-03-09T14:40:55.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:55 vm07 bash[55244]: cluster 2026-03-09T14:40:54.647261+0000 mon.a (mon.0) 421 : cluster [DBG] osdmap e119: 8 total, 7 up, 8 in 2026-03-09T14:40:55.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:55 vm07 bash[56315]: cluster 2026-03-09T14:40:54.549943+0000 mgr.y (mgr.44103) 166 : cluster [DBG] pgmap v90: 161 pgs: 22 stale+active+clean, 139 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T14:40:55.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:55 vm07 bash[56315]: cluster 2026-03-09T14:40:54.549943+0000 mgr.y (mgr.44103) 166 : cluster [DBG] pgmap v90: 161 pgs: 22 stale+active+clean, 139 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T14:40:55.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:55 vm07 bash[56315]: cluster 2026-03-09T14:40:54.647261+0000 mon.a (mon.0) 421 : cluster [DBG] osdmap e119: 8 total, 7 up, 8 in 2026-03-09T14:40:55.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:55 vm07 bash[56315]: cluster 2026-03-09T14:40:54.647261+0000 mon.a (mon.0) 421 : cluster [DBG] osdmap e119: 8 total, 7 up, 8 in 2026-03-09T14:40:56.003 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:40:55 vm11 bash[47489]: Running command: /usr/bin/ln -snf /dev/ceph-6890b6d3-04e1-4427-8994-87bd041edf34/osd-block-104be397-ca1c-4a2d-ae2d-97efa37d095a /var/lib/ceph/osd/ceph-5/block 2026-03-09T14:40:56.003 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:40:55 vm11 bash[47489]: Running command: 
/usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-5/block 2026-03-09T14:40:56.003 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:40:55 vm11 bash[47489]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 2026-03-09T14:40:56.003 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:40:55 vm11 bash[47489]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5 2026-03-09T14:40:56.003 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:40:55 vm11 bash[47489]: --> ceph-volume lvm activate successful for osd ID: 5 2026-03-09T14:40:56.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:55 vm11 bash[43577]: cluster 2026-03-09T14:40:54.549943+0000 mgr.y (mgr.44103) 166 : cluster [DBG] pgmap v90: 161 pgs: 22 stale+active+clean, 139 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T14:40:56.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:55 vm11 bash[43577]: cluster 2026-03-09T14:40:54.549943+0000 mgr.y (mgr.44103) 166 : cluster [DBG] pgmap v90: 161 pgs: 22 stale+active+clean, 139 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T14:40:56.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:55 vm11 bash[43577]: cluster 2026-03-09T14:40:54.647261+0000 mon.a (mon.0) 421 : cluster [DBG] osdmap e119: 8 total, 7 up, 8 in 2026-03-09T14:40:56.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:55 vm11 bash[43577]: cluster 2026-03-09T14:40:54.647261+0000 mon.a (mon.0) 421 : cluster [DBG] osdmap e119: 8 total, 7 up, 8 in 2026-03-09T14:40:56.950 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:40:56 vm11 bash[47833]: debug 2026-03-09T14:40:56.699+0000 7f6e90bda740 -1 Falling back to public interface 2026-03-09T14:40:56.950 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:56 vm11 bash[43577]: cluster 2026-03-09T14:40:56.630089+0000 mon.a (mon.0) 422 : cluster [WRN] Health check failed: Degraded data redundancy: 25/627 objects degraded (3.987%), 8 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:56.950 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:56 vm11 bash[43577]: cluster 2026-03-09T14:40:56.630089+0000 mon.a (mon.0) 422 : cluster [WRN] Health check failed: Degraded data redundancy: 25/627 objects degraded (3.987%), 8 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:57.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:56 vm07 bash[55244]: cluster 2026-03-09T14:40:56.630089+0000 mon.a (mon.0) 422 : cluster [WRN] Health check failed: Degraded data redundancy: 25/627 objects degraded (3.987%), 8 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:57.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:56 vm07 bash[55244]: cluster 2026-03-09T14:40:56.630089+0000 mon.a (mon.0) 422 : cluster [WRN] Health check failed: Degraded data redundancy: 25/627 objects degraded (3.987%), 8 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:57.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:56 vm07 bash[56315]: cluster 2026-03-09T14:40:56.630089+0000 mon.a (mon.0) 422 : cluster [WRN] Health check failed: Degraded data redundancy: 25/627 objects degraded (3.987%), 8 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:57.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:56 vm07 bash[56315]: cluster 2026-03-09T14:40:56.630089+0000 mon.a (mon.0) 422 : cluster [WRN] Health check failed: Degraded data redundancy: 25/627 objects degraded (3.987%), 8 pgs degraded (PG_DEGRADED) 2026-03-09T14:40:57.252 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:40:56 vm11 bash[41290]: ts=2026-03-09T14:40:56.954Z 
caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:40:57.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:57 vm07 bash[55244]: cluster 2026-03-09T14:40:56.550289+0000 mgr.y (mgr.44103) 167 : cluster [DBG] pgmap v92: 161 pgs: 13 active+undersized, 16 stale+active+clean, 8 active+undersized+degraded, 124 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 0 op/s; 25/627 objects degraded (3.987%) 2026-03-09T14:40:57.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:57 vm07 bash[55244]: cluster 2026-03-09T14:40:56.550289+0000 mgr.y (mgr.44103) 167 : cluster [DBG] pgmap v92: 161 pgs: 13 active+undersized, 16 stale+active+clean, 8 active+undersized+degraded, 124 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 0 op/s; 25/627 objects degraded (3.987%) 2026-03-09T14:40:57.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:57 vm07 bash[56315]: cluster 2026-03-09T14:40:56.550289+0000 mgr.y (mgr.44103) 167 : cluster [DBG] pgmap v92: 161 pgs: 13 active+undersized, 16 stale+active+clean, 8 active+undersized+degraded, 124 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 0 op/s; 25/627 objects degraded (3.987%) 2026-03-09T14:40:57.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:57 vm07 bash[56315]: cluster 2026-03-09T14:40:56.550289+0000 mgr.y (mgr.44103) 167 : cluster [DBG] pgmap v92: 161 pgs: 13 active+undersized, 16 stale+active+clean, 8 active+undersized+degraded, 124 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 0 op/s; 25/627 objects degraded (3.987%) 2026-03-09T14:40:58.003 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:40:57 vm11 bash[47833]: debug 2026-03-09T14:40:57.691+0000 7f6e90bda740 -1 osd.5 0 read_superblock omap replica is missing. 
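Between the unit restart and the daemon coming back up, ceph-volume re-activates the existing bluestore volume for OSD 5 (the prime-osd-dir, symlink and chown commands a few entries above, ending in "lvm activate successful for osd ID: 5"). A rough one-off equivalent, shown only as a sketch with the OSD fsid taken from the osd-block LV name in the log, would be:

  # re-activate osd.5 without letting ceph-volume touch systemd units
  cephadm ceph-volume -- lvm activate 5 104be397-ca1c-4a2d-ae2d-97efa37d095a --no-systemd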
2026-03-09T14:40:58.003 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:40:57 vm11 bash[47833]: debug 2026-03-09T14:40:57.703+0000 7f6e90bda740 -1 osd.5 117 log_to_monitors true 2026-03-09T14:40:58.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:57 vm11 bash[43577]: cluster 2026-03-09T14:40:56.550289+0000 mgr.y (mgr.44103) 167 : cluster [DBG] pgmap v92: 161 pgs: 13 active+undersized, 16 stale+active+clean, 8 active+undersized+degraded, 124 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 0 op/s; 25/627 objects degraded (3.987%) 2026-03-09T14:40:58.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:57 vm11 bash[43577]: cluster 2026-03-09T14:40:56.550289+0000 mgr.y (mgr.44103) 167 : cluster [DBG] pgmap v92: 161 pgs: 13 active+undersized, 16 stale+active+clean, 8 active+undersized+degraded, 124 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 0 op/s; 25/627 objects degraded (3.987%) 2026-03-09T14:40:59.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:58 vm11 bash[43577]: audit 2026-03-09T14:40:57.541653+0000 mgr.y (mgr.44103) 168 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:40:59.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:58 vm11 bash[43577]: audit 2026-03-09T14:40:57.541653+0000 mgr.y (mgr.44103) 168 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:40:59.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:58 vm11 bash[43577]: audit 2026-03-09T14:40:57.711753+0000 mon.b (mon.2) 5 : audit [INF] from='osd.5 [v2:192.168.123.111:6808/3497708055,v1:192.168.123.111:6809/3497708055]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T14:40:59.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:58 vm11 bash[43577]: audit 2026-03-09T14:40:57.711753+0000 mon.b (mon.2) 5 : audit [INF] from='osd.5 [v2:192.168.123.111:6808/3497708055,v1:192.168.123.111:6809/3497708055]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T14:40:59.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:58 vm11 bash[43577]: audit 2026-03-09T14:40:57.716986+0000 mon.a (mon.0) 423 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T14:40:59.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:58 vm11 bash[43577]: audit 2026-03-09T14:40:57.716986+0000 mon.a (mon.0) 423 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T14:40:59.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:58 vm07 bash[55244]: audit 2026-03-09T14:40:57.541653+0000 mgr.y (mgr.44103) 168 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:40:59.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:58 vm07 bash[55244]: audit 2026-03-09T14:40:57.541653+0000 mgr.y (mgr.44103) 168 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:40:59.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:58 vm07 bash[55244]: audit 2026-03-09T14:40:57.711753+0000 mon.b 
(mon.2) 5 : audit [INF] from='osd.5 [v2:192.168.123.111:6808/3497708055,v1:192.168.123.111:6809/3497708055]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T14:40:59.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:58 vm07 bash[55244]: audit 2026-03-09T14:40:57.711753+0000 mon.b (mon.2) 5 : audit [INF] from='osd.5 [v2:192.168.123.111:6808/3497708055,v1:192.168.123.111:6809/3497708055]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T14:40:59.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:58 vm07 bash[55244]: audit 2026-03-09T14:40:57.716986+0000 mon.a (mon.0) 423 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T14:40:59.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:58 vm07 bash[55244]: audit 2026-03-09T14:40:57.716986+0000 mon.a (mon.0) 423 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T14:40:59.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:58 vm07 bash[56315]: audit 2026-03-09T14:40:57.541653+0000 mgr.y (mgr.44103) 168 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:40:59.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:58 vm07 bash[56315]: audit 2026-03-09T14:40:57.541653+0000 mgr.y (mgr.44103) 168 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:40:59.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:58 vm07 bash[56315]: audit 2026-03-09T14:40:57.711753+0000 mon.b (mon.2) 5 : audit [INF] from='osd.5 [v2:192.168.123.111:6808/3497708055,v1:192.168.123.111:6809/3497708055]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T14:40:59.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:58 vm07 bash[56315]: audit 2026-03-09T14:40:57.711753+0000 mon.b (mon.2) 5 : audit [INF] from='osd.5 [v2:192.168.123.111:6808/3497708055,v1:192.168.123.111:6809/3497708055]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T14:40:59.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:58 vm07 bash[56315]: audit 2026-03-09T14:40:57.716986+0000 mon.a (mon.0) 423 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T14:40:59.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:58 vm07 bash[56315]: audit 2026-03-09T14:40:57.716986+0000 mon.a (mon.0) 423 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T14:40:59.809 INFO:teuthology.orchestra.run.vm07.stdout:true 2026-03-09T14:41:00.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:59 vm07 bash[55244]: cluster 2026-03-09T14:40:58.550672+0000 mgr.y (mgr.44103) 169 : cluster [DBG] pgmap v93: 161 pgs: 27 active+undersized, 5 stale+active+clean, 20 active+undersized+degraded, 109 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 62/627 objects degraded (9.888%) 2026-03-09T14:41:00.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:59 vm07 bash[55244]: cluster 
2026-03-09T14:40:58.550672+0000 mgr.y (mgr.44103) 169 : cluster [DBG] pgmap v93: 161 pgs: 27 active+undersized, 5 stale+active+clean, 20 active+undersized+degraded, 109 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 62/627 objects degraded (9.888%) 2026-03-09T14:41:00.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:59 vm07 bash[55244]: audit 2026-03-09T14:40:58.694087+0000 mon.a (mon.0) 424 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-09T14:41:00.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:59 vm07 bash[55244]: audit 2026-03-09T14:40:58.694087+0000 mon.a (mon.0) 424 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-09T14:41:00.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:59 vm07 bash[55244]: audit 2026-03-09T14:40:58.697196+0000 mon.b (mon.2) 6 : audit [INF] from='osd.5 [v2:192.168.123.111:6808/3497708055,v1:192.168.123.111:6809/3497708055]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:41:00.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:59 vm07 bash[55244]: audit 2026-03-09T14:40:58.697196+0000 mon.b (mon.2) 6 : audit [INF] from='osd.5 [v2:192.168.123.111:6808/3497708055,v1:192.168.123.111:6809/3497708055]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:41:00.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:59 vm07 bash[55244]: cluster 2026-03-09T14:40:58.702297+0000 mon.a (mon.0) 425 : cluster [DBG] osdmap e120: 8 total, 7 up, 8 in 2026-03-09T14:41:00.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:59 vm07 bash[55244]: cluster 2026-03-09T14:40:58.702297+0000 mon.a (mon.0) 425 : cluster [DBG] osdmap e120: 8 total, 7 up, 8 in 2026-03-09T14:41:00.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:59 vm07 bash[55244]: audit 2026-03-09T14:40:58.704280+0000 mon.a (mon.0) 426 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:41:00.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:40:59 vm07 bash[55244]: audit 2026-03-09T14:40:58.704280+0000 mon.a (mon.0) 426 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:41:00.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:59 vm07 bash[56315]: cluster 2026-03-09T14:40:58.550672+0000 mgr.y (mgr.44103) 169 : cluster [DBG] pgmap v93: 161 pgs: 27 active+undersized, 5 stale+active+clean, 20 active+undersized+degraded, 109 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 62/627 objects degraded (9.888%) 2026-03-09T14:41:00.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:59 vm07 bash[56315]: cluster 2026-03-09T14:40:58.550672+0000 mgr.y (mgr.44103) 169 : cluster [DBG] pgmap v93: 161 pgs: 27 active+undersized, 5 stale+active+clean, 20 active+undersized+degraded, 109 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 62/627 objects degraded (9.888%) 2026-03-09T14:41:00.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:59 vm07 bash[56315]: audit 2026-03-09T14:40:58.694087+0000 mon.a (mon.0) 424 : 
audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished
2026-03-09T14:41:00.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:59 vm07 bash[56315]: audit 2026-03-09T14:40:58.694087+0000 mon.a (mon.0) 424 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished
2026-03-09T14:41:00.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:59 vm07 bash[56315]: audit 2026-03-09T14:40:58.697196+0000 mon.b (mon.2) 6 : audit [INF] from='osd.5 [v2:192.168.123.111:6808/3497708055,v1:192.168.123.111:6809/3497708055]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch
2026-03-09T14:41:00.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:59 vm07 bash[56315]: audit 2026-03-09T14:40:58.697196+0000 mon.b (mon.2) 6 : audit [INF] from='osd.5 [v2:192.168.123.111:6808/3497708055,v1:192.168.123.111:6809/3497708055]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch
2026-03-09T14:41:00.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:59 vm07 bash[56315]: cluster 2026-03-09T14:40:58.702297+0000 mon.a (mon.0) 425 : cluster [DBG] osdmap e120: 8 total, 7 up, 8 in
2026-03-09T14:41:00.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:59 vm07 bash[56315]: cluster 2026-03-09T14:40:58.702297+0000 mon.a (mon.0) 425 : cluster [DBG] osdmap e120: 8 total, 7 up, 8 in
2026-03-09T14:41:00.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:59 vm07 bash[56315]: audit 2026-03-09T14:40:58.704280+0000 mon.a (mon.0) 426 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch
2026-03-09T14:41:00.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:40:59 vm07 bash[56315]: audit 2026-03-09T14:40:58.704280+0000 mon.a (mon.0) 426 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch
2026-03-09T14:41:00.205 INFO:teuthology.orchestra.run.vm07.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-09T14:41:00.206 INFO:teuthology.orchestra.run.vm07.stdout:alertmanager.a vm07 *:9093,9094 running (3m) 34s ago 8m 13.6M - 0.25.0 c8568f914cd2 7b5214f8e385
2026-03-09T14:41:00.207 INFO:teuthology.orchestra.run.vm07.stdout:grafana.a vm11 *:3000 running (3m) 16s ago 8m 36.9M - dad864ee21e9 614f6a00be7a
2026-03-09T14:41:00.207 INFO:teuthology.orchestra.run.vm07.stdout:iscsi.foo.vm07.ohlmos vm07 running (2m) 34s ago 7m 43.0M - 3.5 e1d6a67b021e e3b30dab288c
2026-03-09T14:41:00.207 INFO:teuthology.orchestra.run.vm07.stdout:mgr.x vm11 *:8443,9283,8765 running (2m) 16s ago 10m 465M - 19.2.3-678-ge911bdeb 654f31e6858e d35dddd392d1
2026-03-09T14:41:00.207 INFO:teuthology.orchestra.run.vm07.stdout:mgr.y vm07 *:8443,9283,8765 running (3m) 34s ago 11m 528M - 19.2.3-678-ge911bdeb 654f31e6858e bdbac6dff330
2026-03-09T14:41:00.207 INFO:teuthology.orchestra.run.vm07.stdout:mon.a vm07 running (2m) 34s ago 11m 44.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e bcdaa5dfc948
2026-03-09T14:41:00.207 INFO:teuthology.orchestra.run.vm07.stdout:mon.b vm11 running (112s) 16s ago 11m 37.4M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1caba9bf8a13
2026-03-09T14:41:00.207 INFO:teuthology.orchestra.run.vm07.stdout:mon.c vm07 running (2m) 34s ago 11m 42.8M 2048M 19.2.3-678-ge911bdeb 654f31e6858e ff7dfe3a6c7c
2026-03-09T14:41:00.207 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.a vm07 *:9100 running (3m) 34s ago 8m 7591k - 1.7.0 72c9c2088986 16d64a9c3aa7
2026-03-09T14:41:00.207 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.b vm11 *:9100 running (3m) 16s ago 8m 7591k - 1.7.0 72c9c2088986 8e368c535897
2026-03-09T14:41:00.207 INFO:teuthology.orchestra.run.vm07.stdout:osd.0 vm07 running (55s) 34s ago 10m 45.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 24632814894d
2026-03-09T14:41:00.207 INFO:teuthology.orchestra.run.vm07.stdout:osd.1 vm07 running (39s) 34s ago 10m 31.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 1f773b5d0f68
2026-03-09T14:41:00.207 INFO:teuthology.orchestra.run.vm07.stdout:osd.2 vm07 running (71s) 34s ago 10m 65.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 7d943c2f091c
2026-03-09T14:41:00.207 INFO:teuthology.orchestra.run.vm07.stdout:osd.3 vm07 running (89s) 34s ago 9m 48.1M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 7c234b83449a
2026-03-09T14:41:00.207 INFO:teuthology.orchestra.run.vm07.stdout:osd.4 vm11 running (21s) 16s ago 9m 22.0M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 811379ab4ba5
2026-03-09T14:41:00.207 INFO:teuthology.orchestra.run.vm07.stdout:osd.5 vm11 starting - - - 4096M
2026-03-09T14:41:00.207 INFO:teuthology.orchestra.run.vm07.stdout:osd.6 vm11 running (9m) 16s ago 9m 52.9M 4096M 17.2.0 e1d6a67b021e 52e28e90b585
2026-03-09T14:41:00.207 INFO:teuthology.orchestra.run.vm07.stdout:osd.7 vm11 running (8m) 16s ago 8m 54.8M 4096M 17.2.0 e1d6a67b021e abb74346bf4d
2026-03-09T14:41:00.207 INFO:teuthology.orchestra.run.vm07.stdout:prometheus.a vm11 *:9095 running (2m) 16s ago 8m 40.3M - 2.51.0 1d3b7f56885b e88f0339687c
2026-03-09T14:41:00.207 INFO:teuthology.orchestra.run.vm07.stdout:rgw.foo.vm07.urmgxb vm07 *:8000 running (7m) 34s ago 7m 85.8M - 17.2.0 e1d6a67b021e 765128ae03a3
2026-03-09T14:41:00.207 INFO:teuthology.orchestra.run.vm07.stdout:rgw.foo.vm11.ncyump vm11 *:8000 running (7m) 16s ago 7m 85.1M - 17.2.0 e1d6a67b021e 33917711cfd6
2026-03-09T14:41:00.207 INFO:teuthology.orchestra.run.vm07.stdout:rgw.smpl.vm07.tkkeli vm07 *:80 running (7m) 34s ago 7m 85.3M - 17.2.0 e1d6a67b021e 377fed84fff0
2026-03-09T14:41:00.207 INFO:teuthology.orchestra.run.vm07.stdout:rgw.smpl.vm11.ocxkef vm11 *:80 running (7m) 16s ago 7m 85.2M - 17.2.0 e1d6a67b021e 90ec06d07cd4
2026-03-09T14:41:00.254 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:40:59 vm11 bash[47833]: debug 2026-03-09T14:40:59.923+0000 7f6e88184640 -1 osd.5 117 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-09T14:41:00.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:59 vm11 bash[43577]: cluster 2026-03-09T14:40:58.550672+0000 mgr.y (mgr.44103) 169 : cluster [DBG] pgmap v93: 161 pgs: 27 active+undersized, 5 stale+active+clean, 20 active+undersized+degraded, 109 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 62/627 objects degraded (9.888%)
2026-03-09T14:41:00.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:59 vm11 bash[43577]: cluster 2026-03-09T14:40:58.550672+0000 mgr.y (mgr.44103) 169 : cluster [DBG] pgmap v93: 161 pgs: 27 active+undersized, 5 stale+active+clean, 20 active+undersized+degraded, 109 active+clean; 457 KiB data, 202 MiB used, 160 GiB / 160 GiB avail; 62/627 objects degraded (9.888%)
2026-03-09T14:41:00.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:59 vm11 bash[43577]: audit 2026-03-09T14:40:58.694087+0000 mon.a (mon.0) 424 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished
2026-03-09T14:41:00.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:59 vm11 bash[43577]: audit 2026-03-09T14:40:58.694087+0000 mon.a (mon.0) 424 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished
2026-03-09T14:41:00.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:59 vm11 bash[43577]: audit 2026-03-09T14:40:58.697196+0000 mon.b (mon.2) 6 : audit [INF] from='osd.5 [v2:192.168.123.111:6808/3497708055,v1:192.168.123.111:6809/3497708055]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch
2026-03-09T14:41:00.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:59 vm11 bash[43577]: audit 2026-03-09T14:40:58.697196+0000 mon.b (mon.2) 6 : audit [INF] from='osd.5 [v2:192.168.123.111:6808/3497708055,v1:192.168.123.111:6809/3497708055]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch
2026-03-09T14:41:00.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:59 vm11 bash[43577]: cluster 2026-03-09T14:40:58.702297+0000 mon.a (mon.0) 425 : cluster [DBG] osdmap e120: 8 total, 7 up, 8 in
2026-03-09T14:41:00.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:59 vm11 bash[43577]: cluster 2026-03-09T14:40:58.702297+0000 mon.a (mon.0) 425 : cluster [DBG] osdmap e120: 8 total, 7 up, 8 in
2026-03-09T14:41:00.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:59 vm11 bash[43577]: audit 2026-03-09T14:40:58.704280+0000 mon.a (mon.0) 426 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch
2026-03-09T14:41:00.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:40:59 vm11 bash[43577]: audit 2026-03-09T14:40:58.704280+0000 mon.a (mon.0) 426 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch
2026-03-09T14:41:00.456 INFO:teuthology.orchestra.run.vm07.stdout:{
2026-03-09T14:41:00.456 INFO:teuthology.orchestra.run.vm07.stdout: "mon": {
2026-03-09T14:41:00.456 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3
2026-03-09T14:41:00.456 INFO:teuthology.orchestra.run.vm07.stdout: },
2026-03-09T14:41:00.456 INFO:teuthology.orchestra.run.vm07.stdout: "mgr": {
2026-03-09T14:41:00.456 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-09T14:41:00.456 INFO:teuthology.orchestra.run.vm07.stdout: },
2026-03-09T14:41:00.456 INFO:teuthology.orchestra.run.vm07.stdout: "osd": {
2026-03-09T14:41:00.456 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2,
2026-03-09T14:41:00.456 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 5
2026-03-09T14:41:00.456 INFO:teuthology.orchestra.run.vm07.stdout: },
2026-03-09T14:41:00.456 INFO:teuthology.orchestra.run.vm07.stdout: "rgw": {
2026-03-09T14:41:00.456 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 4
2026-03-09T14:41:00.456 INFO:teuthology.orchestra.run.vm07.stdout: },
2026-03-09T14:41:00.456 INFO:teuthology.orchestra.run.vm07.stdout: "overall": {
2026-03-09T14:41:00.456 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 6,
2026-03-09T14:41:00.456 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 10
2026-03-09T14:41:00.456 INFO:teuthology.orchestra.run.vm07.stdout: }
2026-03-09T14:41:00.456 INFO:teuthology.orchestra.run.vm07.stdout:}
2026-03-09T14:41:00.682 INFO:teuthology.orchestra.run.vm07.stdout:{
2026-03-09T14:41:00.682 INFO:teuthology.orchestra.run.vm07.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df",
2026-03-09T14:41:00.682 INFO:teuthology.orchestra.run.vm07.stdout: "in_progress": true,
2026-03-09T14:41:00.682 INFO:teuthology.orchestra.run.vm07.stdout: "which": "Upgrading all daemon types on all hosts",
2026-03-09T14:41:00.682 INFO:teuthology.orchestra.run.vm07.stdout: "services_complete": [
2026-03-09T14:41:00.682 INFO:teuthology.orchestra.run.vm07.stdout: "mgr",
2026-03-09T14:41:00.682 INFO:teuthology.orchestra.run.vm07.stdout: "mon"
2026-03-09T14:41:00.682 INFO:teuthology.orchestra.run.vm07.stdout: ],
2026-03-09T14:41:00.682 INFO:teuthology.orchestra.run.vm07.stdout: "progress": "10/23 daemons upgraded",
2026-03-09T14:41:00.682 INFO:teuthology.orchestra.run.vm07.stdout: "message": "Currently upgrading osd daemons",
2026-03-09T14:41:00.682 INFO:teuthology.orchestra.run.vm07.stdout: "is_paused": false
2026-03-09T14:41:00.682 INFO:teuthology.orchestra.run.vm07.stdout:}
2026-03-09T14:41:00.979 INFO:teuthology.orchestra.run.vm07.stdout:HEALTH_WARN Degraded data redundancy: 73/627 objects degraded (11.643%), 22 pgs degraded
2026-03-09T14:41:00.979 INFO:teuthology.orchestra.run.vm07.stdout:[WRN] PG_DEGRADED: Degraded data redundancy: 73/627 objects degraded (11.643%), 22 pgs degraded
2026-03-09T14:41:00.979 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.2 is active+undersized+degraded, acting [1,6]
2026-03-09T14:41:00.979 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.c is active+undersized+degraded, acting [2,0]
2026-03-09T14:41:00.979 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.12 is active+undersized+degraded, acting [3,7]
2026-03-09T14:41:00.979 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.14 is active+undersized+degraded, acting [6,3]
2026-03-09T14:41:00.979 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.17 is active+undersized+degraded, acting [6,2]
2026-03-09T14:41:00.979 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.2 is active+undersized+degraded, acting [3,6]
2026-03-09T14:41:00.979 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.4 is active+undersized+degraded, acting [1,2]
2026-03-09T14:41:00.979 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.5 is active+undersized+degraded, acting [3,2]
2026-03-09T14:41:00.979 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.c is active+undersized+degraded, acting [3,6]
2026-03-09T14:41:00.979 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.d is active+undersized+degraded, acting [7,6]
2026-03-09T14:41:00.979 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.10 is active+undersized+degraded, acting [6,0]
2026-03-09T14:41:00.979 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.16 is active+undersized+degraded, acting [7,1]
2026-03-09T14:41:00.979 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.17 is active+undersized+degraded, acting [0,3]
2026-03-09T14:41:00.979 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.1c is active+undersized+degraded, acting [4,1]
2026-03-09T14:41:00.979 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.1d is active+undersized+degraded, acting [4,6]
2026-03-09T14:41:00.979 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.1f is active+undersized+degraded, acting [0,2]
2026-03-09T14:41:00.979 INFO:teuthology.orchestra.run.vm07.stdout: pg 4.3 is active+undersized+degraded, acting [0,7]
2026-03-09T14:41:00.979 INFO:teuthology.orchestra.run.vm07.stdout: pg 4.d is active+undersized+degraded, acting [4,2]
2026-03-09T14:41:00.980 INFO:teuthology.orchestra.run.vm07.stdout: pg 4.15 is active+undersized+degraded, acting [7,3]
2026-03-09T14:41:00.980 INFO:teuthology.orchestra.run.vm07.stdout: pg 4.1f is active+undersized+degraded, acting [6,1]
2026-03-09T14:41:00.980 INFO:teuthology.orchestra.run.vm07.stdout: pg 6.c is active+undersized+degraded, acting [3,6]
2026-03-09T14:41:00.980 INFO:teuthology.orchestra.run.vm07.stdout: pg 6.1a is active+undersized+degraded, acting [4,1]
2026-03-09T14:41:01.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:00 vm11 bash[43577]: audit 2026-03-09T14:40:59.800082+0000 mgr.y (mgr.44103) 170 : audit [DBG] from='client.44277 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:41:01.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:00 vm11 bash[43577]: audit 2026-03-09T14:40:59.800082+0000 mgr.y (mgr.44103) 170 : audit [DBG] from='client.44277 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:41:01.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:00 vm11 bash[43577]: cluster 2026-03-09T14:40:59.926136+0000 osd.5 (osd.5) 1 : cluster [WRN] OSD bench result of 26435.037697 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.5. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
2026-03-09T14:41:01.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:00 vm11 bash[43577]: cluster 2026-03-09T14:40:59.926136+0000 osd.5 (osd.5) 1 : cluster [WRN] OSD bench result of 26435.037697 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.5. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
2026-03-09T14:41:01.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:00 vm11 bash[43577]: audit 2026-03-09T14:41:00.010318+0000 mgr.y (mgr.44103) 171 : audit [DBG] from='client.34258 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:01.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:00 vm11 bash[43577]: audit 2026-03-09T14:41:00.010318+0000 mgr.y (mgr.44103) 171 : audit [DBG] from='client.34258 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:01.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:00 vm11 bash[43577]: audit 2026-03-09T14:41:00.210433+0000 mgr.y (mgr.44103) 172 : audit [DBG] from='client.44283 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:01.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:00 vm11 bash[43577]: audit 2026-03-09T14:41:00.210433+0000 mgr.y (mgr.44103) 172 : audit [DBG] from='client.44283 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:01.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:00 vm11 bash[43577]: audit 2026-03-09T14:41:00.464904+0000 mon.c (mon.1) 19 : audit [DBG] from='client.? 192.168.123.107:0/4047110856' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:01.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:00 vm11 bash[43577]: audit 2026-03-09T14:41:00.464904+0000 mon.c (mon.1) 19 : audit [DBG] from='client.? 192.168.123.107:0/4047110856' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:01.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:00 vm07 bash[55244]: audit 2026-03-09T14:40:59.800082+0000 mgr.y (mgr.44103) 170 : audit [DBG] from='client.44277 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:01.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:00 vm07 bash[55244]: audit 2026-03-09T14:40:59.800082+0000 mgr.y (mgr.44103) 170 : audit [DBG] from='client.44277 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:01.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:00 vm07 bash[55244]: cluster 2026-03-09T14:40:59.926136+0000 osd.5 (osd.5) 1 : cluster [WRN] OSD bench result of 26435.037697 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.5. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. 2026-03-09T14:41:01.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:00 vm07 bash[55244]: cluster 2026-03-09T14:40:59.926136+0000 osd.5 (osd.5) 1 : cluster [WRN] OSD bench result of 26435.037697 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.5. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. 
2026-03-09T14:41:01.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:00 vm07 bash[55244]: audit 2026-03-09T14:41:00.010318+0000 mgr.y (mgr.44103) 171 : audit [DBG] from='client.34258 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:01.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:00 vm07 bash[55244]: audit 2026-03-09T14:41:00.010318+0000 mgr.y (mgr.44103) 171 : audit [DBG] from='client.34258 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:01.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:00 vm07 bash[55244]: audit 2026-03-09T14:41:00.210433+0000 mgr.y (mgr.44103) 172 : audit [DBG] from='client.44283 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:01.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:00 vm07 bash[55244]: audit 2026-03-09T14:41:00.210433+0000 mgr.y (mgr.44103) 172 : audit [DBG] from='client.44283 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:01.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:00 vm07 bash[55244]: audit 2026-03-09T14:41:00.464904+0000 mon.c (mon.1) 19 : audit [DBG] from='client.? 192.168.123.107:0/4047110856' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:01.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:00 vm07 bash[55244]: audit 2026-03-09T14:41:00.464904+0000 mon.c (mon.1) 19 : audit [DBG] from='client.? 192.168.123.107:0/4047110856' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:01.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:00 vm07 bash[56315]: audit 2026-03-09T14:40:59.800082+0000 mgr.y (mgr.44103) 170 : audit [DBG] from='client.44277 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:01.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:00 vm07 bash[56315]: audit 2026-03-09T14:40:59.800082+0000 mgr.y (mgr.44103) 170 : audit [DBG] from='client.44277 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:01.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:00 vm07 bash[56315]: cluster 2026-03-09T14:40:59.926136+0000 osd.5 (osd.5) 1 : cluster [WRN] OSD bench result of 26435.037697 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.5. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. 2026-03-09T14:41:01.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:00 vm07 bash[56315]: cluster 2026-03-09T14:40:59.926136+0000 osd.5 (osd.5) 1 : cluster [WRN] OSD bench result of 26435.037697 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.5. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. 
2026-03-09T14:41:01.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:00 vm07 bash[56315]: audit 2026-03-09T14:41:00.010318+0000 mgr.y (mgr.44103) 171 : audit [DBG] from='client.34258 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:01.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:00 vm07 bash[56315]: audit 2026-03-09T14:41:00.010318+0000 mgr.y (mgr.44103) 171 : audit [DBG] from='client.34258 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:01.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:00 vm07 bash[56315]: audit 2026-03-09T14:41:00.210433+0000 mgr.y (mgr.44103) 172 : audit [DBG] from='client.44283 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:01.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:00 vm07 bash[56315]: audit 2026-03-09T14:41:00.210433+0000 mgr.y (mgr.44103) 172 : audit [DBG] from='client.44283 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:01.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:00 vm07 bash[56315]: audit 2026-03-09T14:41:00.464904+0000 mon.c (mon.1) 19 : audit [DBG] from='client.? 192.168.123.107:0/4047110856' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:01.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:00 vm07 bash[56315]: audit 2026-03-09T14:41:00.464904+0000 mon.c (mon.1) 19 : audit [DBG] from='client.? 192.168.123.107:0/4047110856' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:02.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:01 vm11 bash[43577]: cluster 2026-03-09T14:41:00.550987+0000 mgr.y (mgr.44103) 173 : cluster [DBG] pgmap v95: 161 pgs: 38 active+undersized, 22 active+undersized+degraded, 101 active+clean; 457 KiB data, 232 MiB used, 160 GiB / 160 GiB avail; 73/627 objects degraded (11.643%) 2026-03-09T14:41:02.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:01 vm11 bash[43577]: cluster 2026-03-09T14:41:00.550987+0000 mgr.y (mgr.44103) 173 : cluster [DBG] pgmap v95: 161 pgs: 38 active+undersized, 22 active+undersized+degraded, 101 active+clean; 457 KiB data, 232 MiB used, 160 GiB / 160 GiB avail; 73/627 objects degraded (11.643%) 2026-03-09T14:41:02.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:01 vm11 bash[43577]: audit 2026-03-09T14:41:00.690745+0000 mgr.y (mgr.44103) 174 : audit [DBG] from='client.44292 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:02.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:01 vm11 bash[43577]: audit 2026-03-09T14:41:00.690745+0000 mgr.y (mgr.44103) 174 : audit [DBG] from='client.44292 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:02.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:01 vm11 bash[43577]: cluster 2026-03-09T14:41:00.803876+0000 mon.a (mon.0) 427 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:41:02.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:01 vm11 bash[43577]: cluster 2026-03-09T14:41:00.803876+0000 mon.a (mon.0) 427 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:41:02.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:01 vm11 bash[43577]: cluster 
2026-03-09T14:41:00.869771+0000 mon.a (mon.0) 428 : cluster [INF] osd.5 [v2:192.168.123.111:6808/3497708055,v1:192.168.123.111:6809/3497708055] boot 2026-03-09T14:41:02.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:01 vm11 bash[43577]: cluster 2026-03-09T14:41:00.869771+0000 mon.a (mon.0) 428 : cluster [INF] osd.5 [v2:192.168.123.111:6808/3497708055,v1:192.168.123.111:6809/3497708055] boot 2026-03-09T14:41:02.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:01 vm11 bash[43577]: cluster 2026-03-09T14:41:00.869891+0000 mon.a (mon.0) 429 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in 2026-03-09T14:41:02.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:01 vm11 bash[43577]: cluster 2026-03-09T14:41:00.869891+0000 mon.a (mon.0) 429 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in 2026-03-09T14:41:02.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:01 vm11 bash[43577]: audit 2026-03-09T14:41:00.870465+0000 mon.a (mon.0) 430 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:41:02.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:01 vm11 bash[43577]: audit 2026-03-09T14:41:00.870465+0000 mon.a (mon.0) 430 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:41:02.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:01 vm11 bash[43577]: audit 2026-03-09T14:41:00.988289+0000 mon.c (mon.1) 20 : audit [DBG] from='client.? 192.168.123.107:0/2213358117' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:41:02.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:01 vm11 bash[43577]: audit 2026-03-09T14:41:00.988289+0000 mon.c (mon.1) 20 : audit [DBG] from='client.? 
192.168.123.107:0/2213358117' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:41:02.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:01 vm11 bash[43577]: audit 2026-03-09T14:41:01.141751+0000 mon.a (mon.0) 431 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:02.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:01 vm11 bash[43577]: audit 2026-03-09T14:41:01.141751+0000 mon.a (mon.0) 431 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:02.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:01 vm11 bash[43577]: audit 2026-03-09T14:41:01.146922+0000 mon.a (mon.0) 432 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:02.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:01 vm11 bash[43577]: audit 2026-03-09T14:41:01.146922+0000 mon.a (mon.0) 432 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:02.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:01 vm11 bash[43577]: audit 2026-03-09T14:41:01.723323+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:02.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:01 vm11 bash[43577]: audit 2026-03-09T14:41:01.723323+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:02.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:01 vm11 bash[43577]: audit 2026-03-09T14:41:01.727412+0000 mon.a (mon.0) 434 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:02.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:01 vm11 bash[43577]: audit 2026-03-09T14:41:01.727412+0000 mon.a (mon.0) 434 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:02.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:01 vm07 bash[55244]: cluster 2026-03-09T14:41:00.550987+0000 mgr.y (mgr.44103) 173 : cluster [DBG] pgmap v95: 161 pgs: 38 active+undersized, 22 active+undersized+degraded, 101 active+clean; 457 KiB data, 232 MiB used, 160 GiB / 160 GiB avail; 73/627 objects degraded (11.643%) 2026-03-09T14:41:02.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:01 vm07 bash[55244]: cluster 2026-03-09T14:41:00.550987+0000 mgr.y (mgr.44103) 173 : cluster [DBG] pgmap v95: 161 pgs: 38 active+undersized, 22 active+undersized+degraded, 101 active+clean; 457 KiB data, 232 MiB used, 160 GiB / 160 GiB avail; 73/627 objects degraded (11.643%) 2026-03-09T14:41:02.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:01 vm07 bash[55244]: audit 2026-03-09T14:41:00.690745+0000 mgr.y (mgr.44103) 174 : audit [DBG] from='client.44292 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:02.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:01 vm07 bash[55244]: audit 2026-03-09T14:41:00.690745+0000 mgr.y (mgr.44103) 174 : audit [DBG] from='client.44292 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:02.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:01 vm07 bash[55244]: cluster 2026-03-09T14:41:00.803876+0000 mon.a (mon.0) 427 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:41:02.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:01 vm07 bash[55244]: cluster 
2026-03-09T14:41:00.803876+0000 mon.a (mon.0) 427 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:41:02.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:01 vm07 bash[55244]: cluster 2026-03-09T14:41:00.869771+0000 mon.a (mon.0) 428 : cluster [INF] osd.5 [v2:192.168.123.111:6808/3497708055,v1:192.168.123.111:6809/3497708055] boot 2026-03-09T14:41:02.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:01 vm07 bash[55244]: cluster 2026-03-09T14:41:00.869771+0000 mon.a (mon.0) 428 : cluster [INF] osd.5 [v2:192.168.123.111:6808/3497708055,v1:192.168.123.111:6809/3497708055] boot 2026-03-09T14:41:02.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:01 vm07 bash[55244]: cluster 2026-03-09T14:41:00.869891+0000 mon.a (mon.0) 429 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in 2026-03-09T14:41:02.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:01 vm07 bash[55244]: cluster 2026-03-09T14:41:00.869891+0000 mon.a (mon.0) 429 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in 2026-03-09T14:41:02.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:01 vm07 bash[55244]: audit 2026-03-09T14:41:00.870465+0000 mon.a (mon.0) 430 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:41:02.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:01 vm07 bash[55244]: audit 2026-03-09T14:41:00.870465+0000 mon.a (mon.0) 430 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:41:02.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:01 vm07 bash[55244]: audit 2026-03-09T14:41:00.988289+0000 mon.c (mon.1) 20 : audit [DBG] from='client.? 192.168.123.107:0/2213358117' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:41:02.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:01 vm07 bash[55244]: audit 2026-03-09T14:41:00.988289+0000 mon.c (mon.1) 20 : audit [DBG] from='client.? 
192.168.123.107:0/2213358117' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:41:02.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:01 vm07 bash[55244]: audit 2026-03-09T14:41:01.141751+0000 mon.a (mon.0) 431 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:02.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:01 vm07 bash[55244]: audit 2026-03-09T14:41:01.141751+0000 mon.a (mon.0) 431 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:02.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:01 vm07 bash[55244]: audit 2026-03-09T14:41:01.146922+0000 mon.a (mon.0) 432 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:02.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:01 vm07 bash[55244]: audit 2026-03-09T14:41:01.146922+0000 mon.a (mon.0) 432 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:02.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:01 vm07 bash[55244]: audit 2026-03-09T14:41:01.723323+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:02.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:01 vm07 bash[55244]: audit 2026-03-09T14:41:01.723323+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:02.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:01 vm07 bash[55244]: audit 2026-03-09T14:41:01.727412+0000 mon.a (mon.0) 434 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:02.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:01 vm07 bash[55244]: audit 2026-03-09T14:41:01.727412+0000 mon.a (mon.0) 434 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:02.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:01 vm07 bash[56315]: cluster 2026-03-09T14:41:00.550987+0000 mgr.y (mgr.44103) 173 : cluster [DBG] pgmap v95: 161 pgs: 38 active+undersized, 22 active+undersized+degraded, 101 active+clean; 457 KiB data, 232 MiB used, 160 GiB / 160 GiB avail; 73/627 objects degraded (11.643%) 2026-03-09T14:41:02.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:01 vm07 bash[56315]: cluster 2026-03-09T14:41:00.550987+0000 mgr.y (mgr.44103) 173 : cluster [DBG] pgmap v95: 161 pgs: 38 active+undersized, 22 active+undersized+degraded, 101 active+clean; 457 KiB data, 232 MiB used, 160 GiB / 160 GiB avail; 73/627 objects degraded (11.643%) 2026-03-09T14:41:02.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:01 vm07 bash[56315]: audit 2026-03-09T14:41:00.690745+0000 mgr.y (mgr.44103) 174 : audit [DBG] from='client.44292 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:02.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:01 vm07 bash[56315]: audit 2026-03-09T14:41:00.690745+0000 mgr.y (mgr.44103) 174 : audit [DBG] from='client.44292 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:02.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:01 vm07 bash[56315]: cluster 2026-03-09T14:41:00.803876+0000 mon.a (mon.0) 427 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:41:02.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:01 vm07 bash[56315]: cluster 
2026-03-09T14:41:00.803876+0000 mon.a (mon.0) 427 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:41:02.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:01 vm07 bash[56315]: cluster 2026-03-09T14:41:00.869771+0000 mon.a (mon.0) 428 : cluster [INF] osd.5 [v2:192.168.123.111:6808/3497708055,v1:192.168.123.111:6809/3497708055] boot 2026-03-09T14:41:02.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:01 vm07 bash[56315]: cluster 2026-03-09T14:41:00.869771+0000 mon.a (mon.0) 428 : cluster [INF] osd.5 [v2:192.168.123.111:6808/3497708055,v1:192.168.123.111:6809/3497708055] boot 2026-03-09T14:41:02.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:01 vm07 bash[56315]: cluster 2026-03-09T14:41:00.869891+0000 mon.a (mon.0) 429 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in 2026-03-09T14:41:02.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:01 vm07 bash[56315]: cluster 2026-03-09T14:41:00.869891+0000 mon.a (mon.0) 429 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in 2026-03-09T14:41:02.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:01 vm07 bash[56315]: audit 2026-03-09T14:41:00.870465+0000 mon.a (mon.0) 430 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:41:02.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:01 vm07 bash[56315]: audit 2026-03-09T14:41:00.870465+0000 mon.a (mon.0) 430 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:41:02.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:01 vm07 bash[56315]: audit 2026-03-09T14:41:00.988289+0000 mon.c (mon.1) 20 : audit [DBG] from='client.? 192.168.123.107:0/2213358117' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:41:02.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:01 vm07 bash[56315]: audit 2026-03-09T14:41:00.988289+0000 mon.c (mon.1) 20 : audit [DBG] from='client.? 
192.168.123.107:0/2213358117' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:41:02.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:01 vm07 bash[56315]: audit 2026-03-09T14:41:01.141751+0000 mon.a (mon.0) 431 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:02.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:01 vm07 bash[56315]: audit 2026-03-09T14:41:01.141751+0000 mon.a (mon.0) 431 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:02.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:01 vm07 bash[56315]: audit 2026-03-09T14:41:01.146922+0000 mon.a (mon.0) 432 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:02.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:01 vm07 bash[56315]: audit 2026-03-09T14:41:01.146922+0000 mon.a (mon.0) 432 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:02.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:01 vm07 bash[56315]: audit 2026-03-09T14:41:01.723323+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:02.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:01 vm07 bash[56315]: audit 2026-03-09T14:41:01.723323+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:02.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:01 vm07 bash[56315]: audit 2026-03-09T14:41:01.727412+0000 mon.a (mon.0) 434 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:02.405 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:01 vm07 bash[56315]: audit 2026-03-09T14:41:01.727412+0000 mon.a (mon.0) 434 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:03.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:02 vm11 bash[43577]: cluster 2026-03-09T14:41:01.947835+0000 mon.a (mon.0) 435 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in 2026-03-09T14:41:03.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:02 vm11 bash[43577]: cluster 2026-03-09T14:41:01.947835+0000 mon.a (mon.0) 435 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in 2026-03-09T14:41:03.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:02 vm11 bash[43577]: cluster 2026-03-09T14:41:02.967338+0000 mon.a (mon.0) 436 : cluster [WRN] Health check update: Degraded data redundancy: 73/627 objects degraded (11.643%), 22 pgs degraded (PG_DEGRADED) 2026-03-09T14:41:03.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:02 vm11 bash[43577]: cluster 2026-03-09T14:41:02.967338+0000 mon.a (mon.0) 436 : cluster [WRN] Health check update: Degraded data redundancy: 73/627 objects degraded (11.643%), 22 pgs degraded (PG_DEGRADED) 2026-03-09T14:41:03.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:03 vm07 bash[55244]: cluster 2026-03-09T14:41:01.947835+0000 mon.a (mon.0) 435 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in 2026-03-09T14:41:03.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:03 vm07 bash[55244]: cluster 2026-03-09T14:41:01.947835+0000 mon.a (mon.0) 435 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in 2026-03-09T14:41:03.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:03 vm07 bash[55244]: cluster 2026-03-09T14:41:02.967338+0000 mon.a (mon.0) 436 : cluster [WRN] Health check update: Degraded data redundancy: 73/627 objects degraded 
(11.643%), 22 pgs degraded (PG_DEGRADED) 2026-03-09T14:41:03.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:03 vm07 bash[55244]: cluster 2026-03-09T14:41:02.967338+0000 mon.a (mon.0) 436 : cluster [WRN] Health check update: Degraded data redundancy: 73/627 objects degraded (11.643%), 22 pgs degraded (PG_DEGRADED) 2026-03-09T14:41:03.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:03 vm07 bash[56315]: cluster 2026-03-09T14:41:01.947835+0000 mon.a (mon.0) 435 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in 2026-03-09T14:41:03.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:03 vm07 bash[56315]: cluster 2026-03-09T14:41:01.947835+0000 mon.a (mon.0) 435 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in 2026-03-09T14:41:03.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:03 vm07 bash[56315]: cluster 2026-03-09T14:41:02.967338+0000 mon.a (mon.0) 436 : cluster [WRN] Health check update: Degraded data redundancy: 73/627 objects degraded (11.643%), 22 pgs degraded (PG_DEGRADED) 2026-03-09T14:41:03.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:03 vm07 bash[56315]: cluster 2026-03-09T14:41:02.967338+0000 mon.a (mon.0) 436 : cluster [WRN] Health check update: Degraded data redundancy: 73/627 objects degraded (11.643%), 22 pgs degraded (PG_DEGRADED) 2026-03-09T14:41:03.904 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:41:03 vm07 bash[52213]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:41:03] "GET /metrics HTTP/1.1" 200 38078 "" "Prometheus/2.51.0" 2026-03-09T14:41:04.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:04 vm07 bash[56315]: cluster 2026-03-09T14:41:02.551429+0000 mgr.y (mgr.44103) 175 : cluster [DBG] pgmap v98: 161 pgs: 5 peering, 36 active+undersized, 19 active+undersized+degraded, 101 active+clean; 457 KiB data, 232 MiB used, 160 GiB / 160 GiB avail; 64/627 objects degraded (10.207%) 2026-03-09T14:41:04.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:04 vm07 bash[56315]: cluster 2026-03-09T14:41:02.551429+0000 mgr.y (mgr.44103) 175 : cluster [DBG] pgmap v98: 161 pgs: 5 peering, 36 active+undersized, 19 active+undersized+degraded, 101 active+clean; 457 KiB data, 232 MiB used, 160 GiB / 160 GiB avail; 64/627 objects degraded (10.207%) 2026-03-09T14:41:04.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:04 vm07 bash[55244]: cluster 2026-03-09T14:41:02.551429+0000 mgr.y (mgr.44103) 175 : cluster [DBG] pgmap v98: 161 pgs: 5 peering, 36 active+undersized, 19 active+undersized+degraded, 101 active+clean; 457 KiB data, 232 MiB used, 160 GiB / 160 GiB avail; 64/627 objects degraded (10.207%) 2026-03-09T14:41:04.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:04 vm07 bash[55244]: cluster 2026-03-09T14:41:02.551429+0000 mgr.y (mgr.44103) 175 : cluster [DBG] pgmap v98: 161 pgs: 5 peering, 36 active+undersized, 19 active+undersized+degraded, 101 active+clean; 457 KiB data, 232 MiB used, 160 GiB / 160 GiB avail; 64/627 objects degraded (10.207%) 2026-03-09T14:41:04.503 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:41:04 vm11 bash[41290]: ts=2026-03-09T14:41:04.146Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ 
$labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.5\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.5\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", cluster_addr=\"192.168.123.111\", device_class=\"hdd\", hostname=\"vm11\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.111\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.5\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.111\", device_class=\"hdd\", hostname=\"vm11\", instance=\"192.168.123.111:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.111\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:41:04.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:04 vm11 bash[43577]: cluster 2026-03-09T14:41:02.551429+0000 mgr.y (mgr.44103) 175 : cluster [DBG] pgmap v98: 161 pgs: 5 peering, 36 active+undersized, 19 active+undersized+degraded, 101 active+clean; 457 KiB data, 232 MiB used, 160 GiB / 160 GiB avail; 64/627 objects degraded (10.207%) 2026-03-09T14:41:04.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:04 vm11 bash[43577]: cluster 2026-03-09T14:41:02.551429+0000 mgr.y (mgr.44103) 175 : cluster [DBG] pgmap v98: 161 pgs: 5 peering, 36 active+undersized, 19 active+undersized+degraded, 101 active+clean; 457 KiB data, 232 MiB used, 160 GiB / 160 GiB avail; 64/627 objects degraded (10.207%) 2026-03-09T14:41:06.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:06 vm11 bash[43577]: cluster 2026-03-09T14:41:04.552036+0000 mgr.y (mgr.44103) 176 : cluster [DBG] pgmap v99: 161 pgs: 5 peering, 7 active+undersized, 3 active+undersized+degraded, 146 active+clean; 457 KiB data, 221 MiB used, 160 GiB / 160 GiB avail; 3/627 objects degraded (0.478%) 2026-03-09T14:41:06.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:06 vm11 bash[43577]: cluster 2026-03-09T14:41:04.552036+0000 mgr.y (mgr.44103) 176 : cluster [DBG] pgmap v99: 161 pgs: 5 peering, 7 active+undersized, 3 active+undersized+degraded, 146 active+clean; 457 KiB data, 221 MiB used, 160 GiB / 160 GiB avail; 3/627 objects degraded (0.478%) 2026-03-09T14:41:06.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:06 vm07 bash[56315]: cluster 2026-03-09T14:41:04.552036+0000 mgr.y (mgr.44103) 176 : cluster [DBG] pgmap v99: 161 pgs: 5 peering, 7 active+undersized, 3 active+undersized+degraded, 146 active+clean; 457 KiB data, 221 MiB used, 160 GiB / 160 GiB avail; 3/627 objects degraded (0.478%) 2026-03-09T14:41:06.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:06 vm07 bash[56315]: cluster 2026-03-09T14:41:04.552036+0000 mgr.y (mgr.44103) 176 : cluster [DBG] pgmap v99: 161 pgs: 5 peering, 7 active+undersized, 3 active+undersized+degraded, 146 active+clean; 457 KiB data, 221 MiB used, 160 GiB / 160 GiB avail; 3/627 
objects degraded (0.478%) 2026-03-09T14:41:06.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:06 vm07 bash[55244]: cluster 2026-03-09T14:41:04.552036+0000 mgr.y (mgr.44103) 176 : cluster [DBG] pgmap v99: 161 pgs: 5 peering, 7 active+undersized, 3 active+undersized+degraded, 146 active+clean; 457 KiB data, 221 MiB used, 160 GiB / 160 GiB avail; 3/627 objects degraded (0.478%) 2026-03-09T14:41:06.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:06 vm07 bash[55244]: cluster 2026-03-09T14:41:04.552036+0000 mgr.y (mgr.44103) 176 : cluster [DBG] pgmap v99: 161 pgs: 5 peering, 7 active+undersized, 3 active+undersized+degraded, 146 active+clean; 457 KiB data, 221 MiB used, 160 GiB / 160 GiB avail; 3/627 objects degraded (0.478%) 2026-03-09T14:41:07.252 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:41:06 vm11 bash[41290]: ts=2026-03-09T14:41:06.947Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:41:07.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:07 vm11 bash[43577]: cluster 2026-03-09T14:41:07.031375+0000 mon.a (mon.0) 437 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 3/627 objects degraded (0.478%), 3 pgs degraded) 2026-03-09T14:41:07.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:07 vm11 bash[43577]: cluster 2026-03-09T14:41:07.031375+0000 mon.a (mon.0) 437 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 3/627 objects degraded (0.478%), 3 pgs degraded) 2026-03-09T14:41:07.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:07 vm11 bash[43577]: cluster 2026-03-09T14:41:07.031391+0000 mon.a (mon.0) 438 : cluster [INF] Cluster is now healthy 2026-03-09T14:41:07.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:07 vm11 bash[43577]: cluster 2026-03-09T14:41:07.031391+0000 mon.a (mon.0) 438 : cluster [INF] Cluster is now healthy 2026-03-09T14:41:07.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:07 vm07 bash[56315]: cluster 2026-03-09T14:41:07.031375+0000 mon.a (mon.0) 437 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 3/627 objects degraded (0.478%), 3 pgs degraded) 2026-03-09T14:41:07.404 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:07 vm07 bash[56315]: cluster 2026-03-09T14:41:07.031375+0000 mon.a (mon.0) 437 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 3/627 objects degraded (0.478%), 3 pgs degraded) 2026-03-09T14:41:07.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:07 vm07 bash[56315]: cluster 2026-03-09T14:41:07.031391+0000 mon.a (mon.0) 438 : cluster [INF] Cluster is now healthy 2026-03-09T14:41:07.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:07 vm07 bash[56315]: cluster 2026-03-09T14:41:07.031391+0000 mon.a (mon.0) 438 : cluster [INF] Cluster is now healthy 2026-03-09T14:41:07.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:07 vm07 bash[55244]: cluster 2026-03-09T14:41:07.031375+0000 mon.a (mon.0) 437 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 3/627 objects degraded (0.478%), 3 pgs degraded) 2026-03-09T14:41:07.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:07 vm07 bash[55244]: cluster 2026-03-09T14:41:07.031375+0000 mon.a (mon.0) 437 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 3/627 objects degraded (0.478%), 3 pgs degraded) 2026-03-09T14:41:07.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:07 vm07 bash[55244]: cluster 2026-03-09T14:41:07.031391+0000 mon.a (mon.0) 438 : cluster [INF] Cluster is now healthy 2026-03-09T14:41:07.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:07 vm07 bash[55244]: cluster 2026-03-09T14:41:07.031391+0000 mon.a (mon.0) 438 : cluster [INF] Cluster is now healthy 2026-03-09T14:41:08.365 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:08 vm11 bash[43577]: cluster 2026-03-09T14:41:06.552428+0000 mgr.y (mgr.44103) 177 : cluster [DBG] pgmap v100: 161 pgs: 161 active+clean; 457 KiB data, 221 MiB used, 160 GiB / 160 GiB avail; 390 B/s rd, 0 op/s 2026-03-09T14:41:08.365 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:08 vm11 bash[43577]: cluster 2026-03-09T14:41:06.552428+0000 mgr.y (mgr.44103) 177 : cluster [DBG] pgmap v100: 161 pgs: 161 active+clean; 457 KiB data, 221 MiB used, 160 GiB / 160 GiB avail; 390 B/s rd, 0 op/s 2026-03-09T14:41:08.365 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:08 vm11 bash[43577]: audit 2026-03-09T14:41:07.575014+0000 mon.a (mon.0) 439 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:41:08.365 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:08 vm11 bash[43577]: audit 2026-03-09T14:41:07.575014+0000 mon.a (mon.0) 439 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:41:08.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:08 vm07 bash[56315]: cluster 2026-03-09T14:41:06.552428+0000 mgr.y (mgr.44103) 177 : cluster [DBG] pgmap v100: 161 pgs: 161 active+clean; 457 KiB data, 221 MiB used, 160 GiB / 160 GiB avail; 390 B/s rd, 0 op/s 2026-03-09T14:41:08.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:08 vm07 bash[56315]: cluster 2026-03-09T14:41:06.552428+0000 mgr.y (mgr.44103) 177 : cluster [DBG] pgmap v100: 161 pgs: 161 active+clean; 457 KiB data, 221 MiB used, 160 GiB / 160 GiB avail; 390 B/s rd, 0 op/s 2026-03-09T14:41:08.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:08 vm07 bash[56315]: audit 2026-03-09T14:41:07.575014+0000 mon.a (mon.0) 439 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' 
entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:41:08.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:08 vm07 bash[56315]: audit 2026-03-09T14:41:07.575014+0000 mon.a (mon.0) 439 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:41:08.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:08 vm07 bash[55244]: cluster 2026-03-09T14:41:06.552428+0000 mgr.y (mgr.44103) 177 : cluster [DBG] pgmap v100: 161 pgs: 161 active+clean; 457 KiB data, 221 MiB used, 160 GiB / 160 GiB avail; 390 B/s rd, 0 op/s 2026-03-09T14:41:08.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:08 vm07 bash[55244]: cluster 2026-03-09T14:41:06.552428+0000 mgr.y (mgr.44103) 177 : cluster [DBG] pgmap v100: 161 pgs: 161 active+clean; 457 KiB data, 221 MiB used, 160 GiB / 160 GiB avail; 390 B/s rd, 0 op/s 2026-03-09T14:41:08.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:08 vm07 bash[55244]: audit 2026-03-09T14:41:07.575014+0000 mon.a (mon.0) 439 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:41:08.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:08 vm07 bash[55244]: audit 2026-03-09T14:41:07.575014+0000 mon.a (mon.0) 439 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:41:09.477 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:09 vm11 bash[43577]: audit 2026-03-09T14:41:07.546057+0000 mgr.y (mgr.44103) 178 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:41:09.477 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:09 vm11 bash[43577]: audit 2026-03-09T14:41:07.546057+0000 mgr.y (mgr.44103) 178 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:41:09.477 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:09 vm11 bash[43577]: audit 2026-03-09T14:41:08.440917+0000 mon.a (mon.0) 440 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:09.477 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:09 vm11 bash[43577]: audit 2026-03-09T14:41:08.440917+0000 mon.a (mon.0) 440 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:09.477 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:09 vm11 bash[43577]: audit 2026-03-09T14:41:08.444889+0000 mon.a (mon.0) 441 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:09.477 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:09 vm11 bash[43577]: audit 2026-03-09T14:41:08.444889+0000 mon.a (mon.0) 441 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:09.477 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:09 vm11 bash[43577]: audit 2026-03-09T14:41:08.445605+0000 mon.a (mon.0) 442 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:09.477 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:09 vm11 bash[43577]: audit 2026-03-09T14:41:08.445605+0000 mon.a (mon.0) 442 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": 
"config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:09.477 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:09 vm11 bash[43577]: audit 2026-03-09T14:41:08.446014+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:41:09.477 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:09 vm11 bash[43577]: audit 2026-03-09T14:41:08.446014+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:41:09.477 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:09 vm11 bash[43577]: audit 2026-03-09T14:41:08.449830+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:09.477 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:09 vm11 bash[43577]: audit 2026-03-09T14:41:08.449830+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:09.477 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:09 vm11 bash[43577]: audit 2026-03-09T14:41:08.494205+0000 mon.a (mon.0) 445 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:41:09.477 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:09 vm11 bash[43577]: audit 2026-03-09T14:41:08.494205+0000 mon.a (mon.0) 445 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:41:09.477 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:09 vm11 bash[43577]: audit 2026-03-09T14:41:08.495304+0000 mon.a (mon.0) 446 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:09.477 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:09 vm11 bash[43577]: audit 2026-03-09T14:41:08.495304+0000 mon.a (mon.0) 446 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:09.477 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:09 vm11 bash[43577]: audit 2026-03-09T14:41:08.496038+0000 mon.a (mon.0) 447 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:09.477 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:09 vm11 bash[43577]: audit 2026-03-09T14:41:08.496038+0000 mon.a (mon.0) 447 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:09.477 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:09 vm11 bash[43577]: audit 2026-03-09T14:41:08.496616+0000 mon.a (mon.0) 448 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:09.477 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:09 vm11 bash[43577]: audit 2026-03-09T14:41:08.496616+0000 mon.a (mon.0) 448 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:09.477 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:09 vm11 bash[43577]: audit 2026-03-09T14:41:08.497380+0000 mon.a (mon.0) 449 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["6"], "max": 16}]: dispatch 
2026-03-09T14:41:09.478 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:09 vm11 bash[43577]: audit 2026-03-09T14:41:08.497380+0000 mon.a (mon.0) 449 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["6"], "max": 16}]: dispatch 2026-03-09T14:41:09.478 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:09 vm11 bash[43577]: audit 2026-03-09T14:41:08.944794+0000 mon.a (mon.0) 450 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:09.478 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:09 vm11 bash[43577]: audit 2026-03-09T14:41:08.944794+0000 mon.a (mon.0) 450 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:09.478 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:09 vm11 bash[43577]: audit 2026-03-09T14:41:08.949375+0000 mon.a (mon.0) 451 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T14:41:09.478 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:09 vm11 bash[43577]: audit 2026-03-09T14:41:08.949375+0000 mon.a (mon.0) 451 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T14:41:09.478 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:09 vm11 bash[43577]: audit 2026-03-09T14:41:08.949910+0000 mon.a (mon.0) 452 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:09.478 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:09 vm11 bash[43577]: audit 2026-03-09T14:41:08.949910+0000 mon.a (mon.0) 452 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:09.739 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:09.739 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:41:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:09.739 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:41:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:09.740 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:41:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:09.740 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:41:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:09.740 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:41:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:09.740 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:41:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:09.740 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:41:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:09.740 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:41:09 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
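The audit entries above capture the start of one step of the staggered upgrade: mgr.y pulls "config dump" and "versions", then asks mon.a whether osd.6 may be taken down ({"prefix": "osd ok-to-stop", "ids": ["6"], "max": 16}). When a run like this stalls, the same checks can be repeated by hand from any cephadm shell; a minimal sketch (the daemon id 6 comes from the log above, nothing else is specific to this job):

  ceph orch upgrade status   # which daemon the upgrade is currently working on, and any error message
  ceph osd ok-to-stop 6      # the same safety check the mgr dispatched before restarting osd.6
  ceph versions              # per-daemon version spread while the rolling upgrade is in flight
  ceph health detail         # transient OSD_DOWN / PG_* warnings are expected while a daemon restarts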
2026-03-09T14:41:09.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:09 vm07 bash[56315]: audit 2026-03-09T14:41:07.546057+0000 mgr.y (mgr.44103) 178 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:41:09.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:09 vm07 bash[56315]: audit 2026-03-09T14:41:07.546057+0000 mgr.y (mgr.44103) 178 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:41:09.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:09 vm07 bash[56315]: audit 2026-03-09T14:41:08.440917+0000 mon.a (mon.0) 440 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:09.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:09 vm07 bash[56315]: audit 2026-03-09T14:41:08.440917+0000 mon.a (mon.0) 440 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:09.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:09 vm07 bash[56315]: audit 2026-03-09T14:41:08.444889+0000 mon.a (mon.0) 441 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:09.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:09 vm07 bash[56315]: audit 2026-03-09T14:41:08.444889+0000 mon.a (mon.0) 441 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:09.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:09 vm07 bash[56315]: audit 2026-03-09T14:41:08.445605+0000 mon.a (mon.0) 442 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:09.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:09 vm07 bash[56315]: audit 2026-03-09T14:41:08.445605+0000 mon.a (mon.0) 442 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:09.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:09 vm07 bash[56315]: audit 2026-03-09T14:41:08.446014+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:41:09.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:09 vm07 bash[56315]: audit 2026-03-09T14:41:08.446014+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:41:09.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:09 vm07 bash[56315]: audit 2026-03-09T14:41:08.449830+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:09.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:09 vm07 bash[56315]: audit 2026-03-09T14:41:08.449830+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:09.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:09 vm07 bash[56315]: audit 2026-03-09T14:41:08.494205+0000 mon.a (mon.0) 445 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:41:09.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:09 vm07 bash[56315]: audit 2026-03-09T14:41:08.494205+0000 mon.a (mon.0) 445 : audit 
[DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:41:09.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:09 vm07 bash[56315]: audit 2026-03-09T14:41:08.495304+0000 mon.a (mon.0) 446 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:09.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:09 vm07 bash[56315]: audit 2026-03-09T14:41:08.495304+0000 mon.a (mon.0) 446 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:09.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:09 vm07 bash[56315]: audit 2026-03-09T14:41:08.496038+0000 mon.a (mon.0) 447 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:09.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:09 vm07 bash[56315]: audit 2026-03-09T14:41:08.496038+0000 mon.a (mon.0) 447 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:09.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:09 vm07 bash[56315]: audit 2026-03-09T14:41:08.496616+0000 mon.a (mon.0) 448 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:09.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:09 vm07 bash[56315]: audit 2026-03-09T14:41:08.496616+0000 mon.a (mon.0) 448 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:09.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:09 vm07 bash[56315]: audit 2026-03-09T14:41:08.497380+0000 mon.a (mon.0) 449 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["6"], "max": 16}]: dispatch 2026-03-09T14:41:09.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:09 vm07 bash[56315]: audit 2026-03-09T14:41:08.497380+0000 mon.a (mon.0) 449 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["6"], "max": 16}]: dispatch 2026-03-09T14:41:09.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:09 vm07 bash[56315]: audit 2026-03-09T14:41:08.944794+0000 mon.a (mon.0) 450 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:09.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:09 vm07 bash[56315]: audit 2026-03-09T14:41:08.944794+0000 mon.a (mon.0) 450 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:09.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:09 vm07 bash[56315]: audit 2026-03-09T14:41:08.949375+0000 mon.a (mon.0) 451 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T14:41:09.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:09 vm07 bash[56315]: audit 2026-03-09T14:41:08.949375+0000 mon.a (mon.0) 451 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T14:41:09.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:09 vm07 bash[56315]: audit 2026-03-09T14:41:08.949910+0000 mon.a (mon.0) 452 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' 
entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:09.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:09 vm07 bash[56315]: audit 2026-03-09T14:41:08.949910+0000 mon.a (mon.0) 452 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:09.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:09 vm07 bash[55244]: audit 2026-03-09T14:41:07.546057+0000 mgr.y (mgr.44103) 178 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:41:09.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:09 vm07 bash[55244]: audit 2026-03-09T14:41:07.546057+0000 mgr.y (mgr.44103) 178 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:41:09.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:09 vm07 bash[55244]: audit 2026-03-09T14:41:08.440917+0000 mon.a (mon.0) 440 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:09.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:09 vm07 bash[55244]: audit 2026-03-09T14:41:08.440917+0000 mon.a (mon.0) 440 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:09.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:09 vm07 bash[55244]: audit 2026-03-09T14:41:08.444889+0000 mon.a (mon.0) 441 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:09.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:09 vm07 bash[55244]: audit 2026-03-09T14:41:08.444889+0000 mon.a (mon.0) 441 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:09.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:09 vm07 bash[55244]: audit 2026-03-09T14:41:08.445605+0000 mon.a (mon.0) 442 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:09.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:09 vm07 bash[55244]: audit 2026-03-09T14:41:08.445605+0000 mon.a (mon.0) 442 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:09.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:09 vm07 bash[55244]: audit 2026-03-09T14:41:08.446014+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:41:09.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:09 vm07 bash[55244]: audit 2026-03-09T14:41:08.446014+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:41:09.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:09 vm07 bash[55244]: audit 2026-03-09T14:41:08.449830+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:09.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:09 vm07 bash[55244]: audit 2026-03-09T14:41:08.449830+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:09.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:09 vm07 
bash[55244]: audit 2026-03-09T14:41:08.494205+0000 mon.a (mon.0) 445 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:41:09.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:09 vm07 bash[55244]: audit 2026-03-09T14:41:08.494205+0000 mon.a (mon.0) 445 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:41:09.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:09 vm07 bash[55244]: audit 2026-03-09T14:41:08.495304+0000 mon.a (mon.0) 446 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:09.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:09 vm07 bash[55244]: audit 2026-03-09T14:41:08.495304+0000 mon.a (mon.0) 446 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:09.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:09 vm07 bash[55244]: audit 2026-03-09T14:41:08.496038+0000 mon.a (mon.0) 447 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:09.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:09 vm07 bash[55244]: audit 2026-03-09T14:41:08.496038+0000 mon.a (mon.0) 447 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:09.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:09 vm07 bash[55244]: audit 2026-03-09T14:41:08.496616+0000 mon.a (mon.0) 448 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:09.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:09 vm07 bash[55244]: audit 2026-03-09T14:41:08.496616+0000 mon.a (mon.0) 448 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:09.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:09 vm07 bash[55244]: audit 2026-03-09T14:41:08.497380+0000 mon.a (mon.0) 449 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["6"], "max": 16}]: dispatch 2026-03-09T14:41:09.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:09 vm07 bash[55244]: audit 2026-03-09T14:41:08.497380+0000 mon.a (mon.0) 449 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["6"], "max": 16}]: dispatch 2026-03-09T14:41:09.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:09 vm07 bash[55244]: audit 2026-03-09T14:41:08.944794+0000 mon.a (mon.0) 450 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:09.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:09 vm07 bash[55244]: audit 2026-03-09T14:41:08.944794+0000 mon.a (mon.0) 450 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:09.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:09 vm07 bash[55244]: audit 2026-03-09T14:41:08.949375+0000 mon.a (mon.0) 451 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T14:41:09.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:09 vm07 bash[55244]: audit 2026-03-09T14:41:08.949375+0000 
mon.a (mon.0) 451 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T14:41:09.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:09 vm07 bash[55244]: audit 2026-03-09T14:41:08.949910+0000 mon.a (mon.0) 452 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:09.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:09 vm07 bash[55244]: audit 2026-03-09T14:41:08.949910+0000 mon.a (mon.0) 452 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:10.003 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:41:09 vm11 systemd[1]: Stopping Ceph osd.6 for f59f9828-1bc3-11f1-bfd8-7b3d0c866040... 2026-03-09T14:41:10.003 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:41:09 vm11 bash[27120]: debug 2026-03-09T14:41:09.779+0000 7f07f7fbe700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.6 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T14:41:10.003 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:41:09 vm11 bash[27120]: debug 2026-03-09T14:41:09.779+0000 7f07f7fbe700 -1 osd.6 122 *** Got signal Terminated *** 2026-03-09T14:41:10.003 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:41:09 vm11 bash[27120]: debug 2026-03-09T14:41:09.779+0000 7f07f7fbe700 -1 osd.6 122 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T14:41:10.753 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:41:10 vm11 bash[49295]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-osd-6 2026-03-09T14:41:10.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:10 vm11 bash[43577]: audit 2026-03-09T14:41:08.497517+0000 mgr.y (mgr.44103) 179 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["6"], "max": 16}]: dispatch 2026-03-09T14:41:10.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:10 vm11 bash[43577]: audit 2026-03-09T14:41:08.497517+0000 mgr.y (mgr.44103) 179 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["6"], "max": 16}]: dispatch 2026-03-09T14:41:10.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:10 vm11 bash[43577]: cephadm 2026-03-09T14:41:08.498084+0000 mgr.y (mgr.44103) 180 : cephadm [INF] Upgrade: osd.6 is safe to restart 2026-03-09T14:41:10.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:10 vm11 bash[43577]: cephadm 2026-03-09T14:41:08.498084+0000 mgr.y (mgr.44103) 180 : cephadm [INF] Upgrade: osd.6 is safe to restart 2026-03-09T14:41:10.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:10 vm11 bash[43577]: cluster 2026-03-09T14:41:08.552794+0000 mgr.y (mgr.44103) 181 : cluster [DBG] pgmap v101: 161 pgs: 161 active+clean; 457 KiB data, 221 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T14:41:10.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:10 vm11 bash[43577]: cluster 2026-03-09T14:41:08.552794+0000 mgr.y (mgr.44103) 181 : cluster [DBG] pgmap v101: 161 pgs: 161 active+clean; 457 KiB data, 221 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T14:41:10.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:10 vm11 bash[43577]: cephadm 2026-03-09T14:41:08.940165+0000 mgr.y (mgr.44103) 182 : cephadm [INF] Upgrade: Updating osd.6 2026-03-09T14:41:10.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:10 vm11 bash[43577]: cephadm 2026-03-09T14:41:08.940165+0000 mgr.y (mgr.44103) 182 : cephadm [INF] Upgrade: Updating osd.6 2026-03-09T14:41:10.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:10 vm11 bash[43577]: cephadm 2026-03-09T14:41:08.951411+0000 mgr.y (mgr.44103) 183 : cephadm [INF] Deploying daemon osd.6 on vm11 2026-03-09T14:41:10.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:10 vm11 bash[43577]: cephadm 2026-03-09T14:41:08.951411+0000 mgr.y (mgr.44103) 183 : cephadm [INF] Deploying daemon osd.6 on vm11 2026-03-09T14:41:10.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:10 vm11 bash[43577]: cluster 2026-03-09T14:41:09.788440+0000 mon.a (mon.0) 453 : cluster [INF] osd.6 marked itself down and dead 2026-03-09T14:41:10.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:10 vm11 bash[43577]: cluster 2026-03-09T14:41:09.788440+0000 mon.a (mon.0) 453 : cluster [INF] osd.6 marked itself down and dead 2026-03-09T14:41:10.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:10 vm07 bash[55244]: audit 2026-03-09T14:41:08.497517+0000 mgr.y (mgr.44103) 179 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["6"], "max": 16}]: dispatch 2026-03-09T14:41:10.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:10 vm07 bash[55244]: audit 2026-03-09T14:41:08.497517+0000 mgr.y (mgr.44103) 179 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["6"], "max": 16}]: dispatch 2026-03-09T14:41:10.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:10 vm07 bash[55244]: cephadm 2026-03-09T14:41:08.498084+0000 mgr.y (mgr.44103) 180 : cephadm [INF] Upgrade: osd.6 is safe to restart 2026-03-09T14:41:10.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:10 vm07 bash[55244]: cephadm 2026-03-09T14:41:08.498084+0000 mgr.y (mgr.44103) 180 : cephadm [INF] Upgrade: osd.6 is safe to restart 2026-03-09T14:41:10.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:10 vm07 bash[55244]: cluster 2026-03-09T14:41:08.552794+0000 mgr.y (mgr.44103) 181 : cluster [DBG] pgmap v101: 161 pgs: 161 active+clean; 457 KiB data, 221 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T14:41:10.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:10 vm07 bash[55244]: cluster 2026-03-09T14:41:08.552794+0000 mgr.y (mgr.44103) 181 : cluster [DBG] pgmap v101: 161 pgs: 161 active+clean; 457 KiB data, 221 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T14:41:10.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:10 vm07 bash[55244]: cephadm 2026-03-09T14:41:08.940165+0000 mgr.y (mgr.44103) 182 : cephadm [INF] Upgrade: Updating osd.6 2026-03-09T14:41:10.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:10 vm07 bash[55244]: cephadm 2026-03-09T14:41:08.940165+0000 mgr.y (mgr.44103) 182 : cephadm [INF] Upgrade: Updating osd.6 2026-03-09T14:41:10.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:10 vm07 bash[55244]: cephadm 2026-03-09T14:41:08.951411+0000 mgr.y (mgr.44103) 183 : cephadm [INF] Deploying daemon osd.6 on vm11 2026-03-09T14:41:10.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:10 vm07 bash[55244]: cephadm 2026-03-09T14:41:08.951411+0000 mgr.y (mgr.44103) 183 : cephadm [INF] Deploying daemon osd.6 on vm11 2026-03-09T14:41:10.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:10 vm07 bash[55244]: cluster 2026-03-09T14:41:09.788440+0000 mon.a (mon.0) 453 : cluster [INF] osd.6 marked itself down and dead 2026-03-09T14:41:10.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:10 vm07 bash[55244]: cluster 2026-03-09T14:41:09.788440+0000 mon.a (mon.0) 453 : cluster [INF] osd.6 marked itself down and dead 2026-03-09T14:41:10.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:10 vm07 bash[56315]: audit 2026-03-09T14:41:08.497517+0000 mgr.y (mgr.44103) 179 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["6"], "max": 16}]: dispatch 2026-03-09T14:41:10.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:10 vm07 bash[56315]: audit 2026-03-09T14:41:08.497517+0000 mgr.y (mgr.44103) 179 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["6"], "max": 16}]: dispatch 2026-03-09T14:41:10.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:10 vm07 bash[56315]: cephadm 2026-03-09T14:41:08.498084+0000 mgr.y (mgr.44103) 180 : cephadm [INF] Upgrade: osd.6 is safe to restart 2026-03-09T14:41:10.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:10 vm07 bash[56315]: cephadm 2026-03-09T14:41:08.498084+0000 mgr.y (mgr.44103) 180 : cephadm [INF] Upgrade: osd.6 is safe to restart 2026-03-09T14:41:10.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:10 vm07 bash[56315]: cluster 2026-03-09T14:41:08.552794+0000 mgr.y (mgr.44103) 181 : cluster [DBG] pgmap v101: 161 pgs: 161 active+clean; 457 KiB data, 221 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T14:41:10.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:10 vm07 bash[56315]: cluster 2026-03-09T14:41:08.552794+0000 mgr.y (mgr.44103) 181 : cluster [DBG] pgmap v101: 161 pgs: 161 active+clean; 457 KiB data, 221 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-09T14:41:10.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:10 vm07 bash[56315]: cephadm 2026-03-09T14:41:08.940165+0000 mgr.y (mgr.44103) 182 : cephadm [INF] Upgrade: Updating osd.6 2026-03-09T14:41:10.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:10 vm07 bash[56315]: cephadm 2026-03-09T14:41:08.940165+0000 mgr.y (mgr.44103) 182 : cephadm [INF] Upgrade: Updating osd.6 2026-03-09T14:41:10.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:10 vm07 bash[56315]: cephadm 2026-03-09T14:41:08.951411+0000 mgr.y (mgr.44103) 183 : cephadm [INF] Deploying daemon osd.6 on vm11 2026-03-09T14:41:10.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:10 vm07 bash[56315]: cephadm 2026-03-09T14:41:08.951411+0000 mgr.y (mgr.44103) 183 : cephadm [INF] Deploying daemon osd.6 on vm11 2026-03-09T14:41:10.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:10 vm07 bash[56315]: cluster 2026-03-09T14:41:09.788440+0000 mon.a (mon.0) 453 : cluster [INF] osd.6 marked itself down and dead 2026-03-09T14:41:10.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:10 vm07 bash[56315]: cluster 2026-03-09T14:41:09.788440+0000 mon.a (mon.0) 453 : cluster [INF] osd.6 marked itself down and dead 2026-03-09T14:41:11.095 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:41:10 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:11.096 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:41:10 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:11.096 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:41:10 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:11.097 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:41:10 vm11 systemd[1]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@osd.6.service: Deactivated successfully. 2026-03-09T14:41:11.097 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:41:10 vm11 systemd[1]: Stopped Ceph osd.6 for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:41:11.097 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:41:10 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:11.097 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:41:11 vm11 systemd[1]: Started Ceph osd.6 for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:41:11.097 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:41:10 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:11.097 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:41:10 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:11.097 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:41:10 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:11.098 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:41:10 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:11.098 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:10 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
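By this point cephadm has confirmed osd.6 is safe to restart, the old container has received SIGTERM and done an immediate shutdown (osd_fast_shutdown=true), and systemd has stopped and restarted ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@osd.6.service with the new image. The repeated KillMode=none messages are a systemd deprecation warning about the unit template, not a failure of the upgrade. To follow a single daemon's restart directly on the host, something along these lines works (the fsid is the one visible in the unit names above; substitute your own):

  systemctl status ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@osd.6.service
  journalctl -u ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@osd.6.service --since "10 min ago"
  ceph orch ps | grep osd.6   # confirms the daemon comes back on the target image and version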
2026-03-09T14:41:11.454 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:41:11 vm11 bash[49502]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T14:41:11.454 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:41:11 vm11 bash[49502]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T14:41:11.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:11 vm11 bash[43577]: cluster 2026-03-09T14:41:10.458017+0000 mon.a (mon.0) 454 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:41:11.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:11 vm11 bash[43577]: cluster 2026-03-09T14:41:10.458017+0000 mon.a (mon.0) 454 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:41:11.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:11 vm11 bash[43577]: cluster 2026-03-09T14:41:10.475137+0000 mon.a (mon.0) 455 : cluster [DBG] osdmap e123: 8 total, 7 up, 8 in 2026-03-09T14:41:11.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:11 vm11 bash[43577]: cluster 2026-03-09T14:41:10.475137+0000 mon.a (mon.0) 455 : cluster [DBG] osdmap e123: 8 total, 7 up, 8 in 2026-03-09T14:41:11.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:11 vm11 bash[43577]: audit 2026-03-09T14:41:11.093547+0000 mon.a (mon.0) 456 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:11.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:11 vm11 bash[43577]: audit 2026-03-09T14:41:11.093547+0000 mon.a (mon.0) 456 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:11.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:11 vm11 bash[43577]: audit 2026-03-09T14:41:11.101118+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:11.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:11 vm11 bash[43577]: audit 2026-03-09T14:41:11.101118+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:11.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:11 vm07 bash[56315]: cluster 2026-03-09T14:41:10.458017+0000 mon.a (mon.0) 454 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:41:11.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:11 vm07 bash[56315]: cluster 2026-03-09T14:41:10.458017+0000 mon.a (mon.0) 454 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:41:11.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:11 vm07 bash[56315]: cluster 2026-03-09T14:41:10.475137+0000 mon.a (mon.0) 455 : cluster [DBG] osdmap e123: 8 total, 7 up, 8 in 2026-03-09T14:41:11.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:11 vm07 bash[56315]: cluster 2026-03-09T14:41:10.475137+0000 mon.a (mon.0) 455 : cluster [DBG] osdmap e123: 8 total, 7 up, 8 in 2026-03-09T14:41:11.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:11 vm07 bash[56315]: audit 2026-03-09T14:41:11.093547+0000 mon.a (mon.0) 456 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:11.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:11 vm07 bash[56315]: audit 2026-03-09T14:41:11.093547+0000 mon.a (mon.0) 456 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:11.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:11 vm07 bash[56315]: audit 2026-03-09T14:41:11.101118+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.44103 
192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:11.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:11 vm07 bash[56315]: audit 2026-03-09T14:41:11.101118+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:11.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:11 vm07 bash[55244]: cluster 2026-03-09T14:41:10.458017+0000 mon.a (mon.0) 454 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:41:11.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:11 vm07 bash[55244]: cluster 2026-03-09T14:41:10.458017+0000 mon.a (mon.0) 454 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:41:11.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:11 vm07 bash[55244]: cluster 2026-03-09T14:41:10.475137+0000 mon.a (mon.0) 455 : cluster [DBG] osdmap e123: 8 total, 7 up, 8 in 2026-03-09T14:41:11.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:11 vm07 bash[55244]: cluster 2026-03-09T14:41:10.475137+0000 mon.a (mon.0) 455 : cluster [DBG] osdmap e123: 8 total, 7 up, 8 in 2026-03-09T14:41:11.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:11 vm07 bash[55244]: audit 2026-03-09T14:41:11.093547+0000 mon.a (mon.0) 456 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:11.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:11 vm07 bash[55244]: audit 2026-03-09T14:41:11.093547+0000 mon.a (mon.0) 456 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:11.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:11 vm07 bash[55244]: audit 2026-03-09T14:41:11.101118+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:11.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:11 vm07 bash[55244]: audit 2026-03-09T14:41:11.101118+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:12.456 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:41:12 vm11 bash[49502]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T14:41:12.457 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:41:12 vm11 bash[49502]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T14:41:12.457 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:41:12 vm11 bash[49502]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T14:41:12.457 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:41:12 vm11 bash[49502]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-6 2026-03-09T14:41:12.457 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:41:12 vm11 bash[49502]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-516d0daf-81ad-45a1-9e33-0c13dd09a428/osd-block-77a63107-dca7-4e61-85ab-633ea82bcb7d --path /var/lib/ceph/osd/ceph-6 --no-mon-config 2026-03-09T14:41:12.754 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:41:12 vm11 bash[49502]: Running command: /usr/bin/ln -snf /dev/ceph-516d0daf-81ad-45a1-9e33-0c13dd09a428/osd-block-77a63107-dca7-4e61-85ab-633ea82bcb7d /var/lib/ceph/osd/ceph-6/block 2026-03-09T14:41:12.754 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:41:12 vm11 bash[49502]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-6/block 2026-03-09T14:41:12.754 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:41:12 vm11 bash[49502]: Running command: /usr/bin/chown -R ceph:ceph 
/dev/dm-2 2026-03-09T14:41:12.754 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:41:12 vm11 bash[49502]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-6 2026-03-09T14:41:12.754 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:41:12 vm11 bash[49502]: --> ceph-volume lvm activate successful for osd ID: 6 2026-03-09T14:41:12.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:12 vm11 bash[43577]: cluster 2026-03-09T14:41:10.553080+0000 mgr.y (mgr.44103) 184 : cluster [DBG] pgmap v103: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 221 MiB used, 160 GiB / 160 GiB avail; 951 B/s rd, 0 op/s 2026-03-09T14:41:12.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:12 vm11 bash[43577]: cluster 2026-03-09T14:41:10.553080+0000 mgr.y (mgr.44103) 184 : cluster [DBG] pgmap v103: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 221 MiB used, 160 GiB / 160 GiB avail; 951 B/s rd, 0 op/s 2026-03-09T14:41:12.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:12 vm11 bash[43577]: cluster 2026-03-09T14:41:11.488172+0000 mon.a (mon.0) 458 : cluster [DBG] osdmap e124: 8 total, 7 up, 8 in 2026-03-09T14:41:12.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:12 vm11 bash[43577]: cluster 2026-03-09T14:41:11.488172+0000 mon.a (mon.0) 458 : cluster [DBG] osdmap e124: 8 total, 7 up, 8 in 2026-03-09T14:41:12.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:12 vm07 bash[56315]: cluster 2026-03-09T14:41:10.553080+0000 mgr.y (mgr.44103) 184 : cluster [DBG] pgmap v103: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 221 MiB used, 160 GiB / 160 GiB avail; 951 B/s rd, 0 op/s 2026-03-09T14:41:12.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:12 vm07 bash[56315]: cluster 2026-03-09T14:41:10.553080+0000 mgr.y (mgr.44103) 184 : cluster [DBG] pgmap v103: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 221 MiB used, 160 GiB / 160 GiB avail; 951 B/s rd, 0 op/s 2026-03-09T14:41:12.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:12 vm07 bash[56315]: cluster 2026-03-09T14:41:11.488172+0000 mon.a (mon.0) 458 : cluster [DBG] osdmap e124: 8 total, 7 up, 8 in 2026-03-09T14:41:12.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:12 vm07 bash[56315]: cluster 2026-03-09T14:41:11.488172+0000 mon.a (mon.0) 458 : cluster [DBG] osdmap e124: 8 total, 7 up, 8 in 2026-03-09T14:41:12.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:12 vm07 bash[55244]: cluster 2026-03-09T14:41:10.553080+0000 mgr.y (mgr.44103) 184 : cluster [DBG] pgmap v103: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 221 MiB used, 160 GiB / 160 GiB avail; 951 B/s rd, 0 op/s 2026-03-09T14:41:12.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:12 vm07 bash[55244]: cluster 2026-03-09T14:41:10.553080+0000 mgr.y (mgr.44103) 184 : cluster [DBG] pgmap v103: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 221 MiB used, 160 GiB / 160 GiB avail; 951 B/s rd, 0 op/s 2026-03-09T14:41:12.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:12 vm07 bash[55244]: cluster 2026-03-09T14:41:11.488172+0000 mon.a (mon.0) 458 : cluster [DBG] osdmap e124: 8 total, 7 up, 8 in 2026-03-09T14:41:12.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:12 vm07 bash[55244]: cluster 2026-03-09T14:41:11.488172+0000 mon.a (mon.0) 458 : cluster [DBG] osdmap e124: 8 total, 7 up, 8 in 2026-03-09T14:41:13.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:13 vm07 bash[55244]: cluster 2026-03-09T14:41:12.553584+0000 mgr.y 
(mgr.44103) 185 : cluster [DBG] pgmap v105: 161 pgs: 2 activating+undersized, 1 activating+undersized+degraded, 17 peering, 4 stale+active+clean, 137 active+clean; 457 KiB data, 221 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 0 op/s; 2/627 objects degraded (0.319%) 2026-03-09T14:41:13.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:13 vm07 bash[55244]: cluster 2026-03-09T14:41:12.553584+0000 mgr.y (mgr.44103) 185 : cluster [DBG] pgmap v105: 161 pgs: 2 activating+undersized, 1 activating+undersized+degraded, 17 peering, 4 stale+active+clean, 137 active+clean; 457 KiB data, 221 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 0 op/s; 2/627 objects degraded (0.319%) 2026-03-09T14:41:13.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:13 vm07 bash[55244]: audit 2026-03-09T14:41:12.634222+0000 mon.a (mon.0) 459 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:13.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:13 vm07 bash[55244]: audit 2026-03-09T14:41:12.634222+0000 mon.a (mon.0) 459 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:13.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:13 vm07 bash[55244]: cluster 2026-03-09T14:41:13.464762+0000 mon.a (mon.0) 460 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 3 pgs peering (PG_AVAILABILITY) 2026-03-09T14:41:13.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:13 vm07 bash[55244]: cluster 2026-03-09T14:41:13.464762+0000 mon.a (mon.0) 460 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 3 pgs peering (PG_AVAILABILITY) 2026-03-09T14:41:13.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:13 vm07 bash[55244]: cluster 2026-03-09T14:41:13.464785+0000 mon.a (mon.0) 461 : cluster [WRN] Health check failed: Degraded data redundancy: 2/627 objects degraded (0.319%), 1 pg degraded (PG_DEGRADED) 2026-03-09T14:41:13.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:13 vm07 bash[55244]: cluster 2026-03-09T14:41:13.464785+0000 mon.a (mon.0) 461 : cluster [WRN] Health check failed: Degraded data redundancy: 2/627 objects degraded (0.319%), 1 pg degraded (PG_DEGRADED) 2026-03-09T14:41:13.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:13 vm07 bash[56315]: cluster 2026-03-09T14:41:12.553584+0000 mgr.y (mgr.44103) 185 : cluster [DBG] pgmap v105: 161 pgs: 2 activating+undersized, 1 activating+undersized+degraded, 17 peering, 4 stale+active+clean, 137 active+clean; 457 KiB data, 221 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 0 op/s; 2/627 objects degraded (0.319%) 2026-03-09T14:41:13.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:13 vm07 bash[56315]: cluster 2026-03-09T14:41:12.553584+0000 mgr.y (mgr.44103) 185 : cluster [DBG] pgmap v105: 161 pgs: 2 activating+undersized, 1 activating+undersized+degraded, 17 peering, 4 stale+active+clean, 137 active+clean; 457 KiB data, 221 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 0 op/s; 2/627 objects degraded (0.319%) 2026-03-09T14:41:13.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:13 vm07 bash[56315]: audit 2026-03-09T14:41:12.634222+0000 mon.a (mon.0) 459 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:13.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:13 vm07 bash[56315]: audit 2026-03-09T14:41:12.634222+0000 mon.a (mon.0) 459 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:13.904 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:13 vm07 bash[56315]: cluster 2026-03-09T14:41:13.464762+0000 mon.a (mon.0) 460 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 3 pgs peering (PG_AVAILABILITY) 2026-03-09T14:41:13.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:13 vm07 bash[56315]: cluster 2026-03-09T14:41:13.464762+0000 mon.a (mon.0) 460 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 3 pgs peering (PG_AVAILABILITY) 2026-03-09T14:41:13.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:13 vm07 bash[56315]: cluster 2026-03-09T14:41:13.464785+0000 mon.a (mon.0) 461 : cluster [WRN] Health check failed: Degraded data redundancy: 2/627 objects degraded (0.319%), 1 pg degraded (PG_DEGRADED) 2026-03-09T14:41:13.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:13 vm07 bash[56315]: cluster 2026-03-09T14:41:13.464785+0000 mon.a (mon.0) 461 : cluster [WRN] Health check failed: Degraded data redundancy: 2/627 objects degraded (0.319%), 1 pg degraded (PG_DEGRADED) 2026-03-09T14:41:13.904 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:41:13 vm07 bash[52213]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:41:13] "GET /metrics HTTP/1.1" 200 38093 "" "Prometheus/2.51.0" 2026-03-09T14:41:14.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:13 vm11 bash[43577]: cluster 2026-03-09T14:41:12.553584+0000 mgr.y (mgr.44103) 185 : cluster [DBG] pgmap v105: 161 pgs: 2 activating+undersized, 1 activating+undersized+degraded, 17 peering, 4 stale+active+clean, 137 active+clean; 457 KiB data, 221 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 0 op/s; 2/627 objects degraded (0.319%) 2026-03-09T14:41:14.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:13 vm11 bash[43577]: cluster 2026-03-09T14:41:12.553584+0000 mgr.y (mgr.44103) 185 : cluster [DBG] pgmap v105: 161 pgs: 2 activating+undersized, 1 activating+undersized+degraded, 17 peering, 4 stale+active+clean, 137 active+clean; 457 KiB data, 221 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 0 op/s; 2/627 objects degraded (0.319%) 2026-03-09T14:41:14.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:13 vm11 bash[43577]: audit 2026-03-09T14:41:12.634222+0000 mon.a (mon.0) 459 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:14.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:13 vm11 bash[43577]: audit 2026-03-09T14:41:12.634222+0000 mon.a (mon.0) 459 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:14.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:13 vm11 bash[43577]: cluster 2026-03-09T14:41:13.464762+0000 mon.a (mon.0) 460 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 3 pgs peering (PG_AVAILABILITY) 2026-03-09T14:41:14.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:13 vm11 bash[43577]: cluster 2026-03-09T14:41:13.464762+0000 mon.a (mon.0) 460 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 3 pgs peering (PG_AVAILABILITY) 2026-03-09T14:41:14.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:13 vm11 bash[43577]: cluster 2026-03-09T14:41:13.464785+0000 mon.a (mon.0) 461 : cluster [WRN] Health check failed: Degraded data redundancy: 2/627 objects degraded (0.319%), 1 pg degraded (PG_DEGRADED) 2026-03-09T14:41:14.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:13 vm11 bash[43577]: cluster 2026-03-09T14:41:13.464785+0000 mon.a (mon.0) 461 : cluster [WRN] Health check failed: 
Degraded data redundancy: 2/627 objects degraded (0.319%), 1 pg degraded (PG_DEGRADED) 2026-03-09T14:41:14.503 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:41:14 vm11 bash[49856]: debug 2026-03-09T14:41:14.087+0000 7f9f3d199740 -1 Falling back to public interface 2026-03-09T14:41:14.503 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:41:14 vm11 bash[41290]: ts=2026-03-09T14:41:14.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.6\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.6\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", cluster_addr=\"192.168.123.111\", device_class=\"hdd\", hostname=\"vm11\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.111\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.6\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.111\", device_class=\"hdd\", hostname=\"vm11\", instance=\"192.168.123.111:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.111\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:41:15.631 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:41:15 vm11 bash[49856]: debug 2026-03-09T14:41:15.303+0000 7f9f3d199740 -1 osd.6 0 read_superblock omap replica is missing. 
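The Prometheus warning just above ("found duplicate series ... many-to-many matching not allowed") is a side effect of the restart rather than a cluster problem. The CephOSDFlapping rule quoted in the message joins rate(ceph_osd_up[5m]) to ceph_osd_metadata with on (ceph_daemon) group_left (hostname), which requires exactly one metadata series per ceph_daemon; during the upgrade there are briefly two ceph_osd_metadata series for osd.6 that differ only in their instance label ("ceph_cluster" vs "192.168.123.111:9283"), so that evaluation of the rule is skipped. It clears on its own once the stale series ages out. If one wanted to harden the rule against this, collapsing the right-hand side first, e.g. max by (ceph_daemon, hostname) (ceph_osd_metadata), is the usual PromQL pattern; this is only an illustration, not the rule Ceph ships.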
2026-03-09T14:41:15.631 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:41:15 vm11 bash[49856]: debug 2026-03-09T14:41:15.315+0000 7f9f3d199740 -1 osd.6 122 log_to_monitors true 2026-03-09T14:41:15.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:15 vm07 bash[56315]: cluster 2026-03-09T14:41:14.554079+0000 mgr.y (mgr.44103) 186 : cluster [DBG] pgmap v106: 161 pgs: 19 active+undersized, 2 activating+undersized, 1 activating+undersized+degraded, 17 peering, 14 active+undersized+degraded, 108 active+clean; 457 KiB data, 221 MiB used, 160 GiB / 160 GiB avail; 52/627 objects degraded (8.293%) 2026-03-09T14:41:15.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:15 vm07 bash[56315]: cluster 2026-03-09T14:41:14.554079+0000 mgr.y (mgr.44103) 186 : cluster [DBG] pgmap v106: 161 pgs: 19 active+undersized, 2 activating+undersized, 1 activating+undersized+degraded, 17 peering, 14 active+undersized+degraded, 108 active+clean; 457 KiB data, 221 MiB used, 160 GiB / 160 GiB avail; 52/627 objects degraded (8.293%) 2026-03-09T14:41:15.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:15 vm07 bash[56315]: audit 2026-03-09T14:41:15.323156+0000 mon.b (mon.2) 7 : audit [INF] from='osd.6 [v2:192.168.123.111:6816/2386486591,v1:192.168.123.111:6817/2386486591]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T14:41:15.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:15 vm07 bash[56315]: audit 2026-03-09T14:41:15.323156+0000 mon.b (mon.2) 7 : audit [INF] from='osd.6 [v2:192.168.123.111:6816/2386486591,v1:192.168.123.111:6817/2386486591]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T14:41:15.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:15 vm07 bash[56315]: audit 2026-03-09T14:41:15.328530+0000 mon.a (mon.0) 462 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T14:41:15.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:15 vm07 bash[56315]: audit 2026-03-09T14:41:15.328530+0000 mon.a (mon.0) 462 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T14:41:15.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:15 vm07 bash[55244]: cluster 2026-03-09T14:41:14.554079+0000 mgr.y (mgr.44103) 186 : cluster [DBG] pgmap v106: 161 pgs: 19 active+undersized, 2 activating+undersized, 1 activating+undersized+degraded, 17 peering, 14 active+undersized+degraded, 108 active+clean; 457 KiB data, 221 MiB used, 160 GiB / 160 GiB avail; 52/627 objects degraded (8.293%) 2026-03-09T14:41:15.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:15 vm07 bash[55244]: cluster 2026-03-09T14:41:14.554079+0000 mgr.y (mgr.44103) 186 : cluster [DBG] pgmap v106: 161 pgs: 19 active+undersized, 2 activating+undersized, 1 activating+undersized+degraded, 17 peering, 14 active+undersized+degraded, 108 active+clean; 457 KiB data, 221 MiB used, 160 GiB / 160 GiB avail; 52/627 objects degraded (8.293%) 2026-03-09T14:41:15.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:15 vm07 bash[55244]: audit 2026-03-09T14:41:15.323156+0000 mon.b (mon.2) 7 : audit [INF] from='osd.6 [v2:192.168.123.111:6816/2386486591,v1:192.168.123.111:6817/2386486591]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T14:41:15.904 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:15 vm07 bash[55244]: audit 2026-03-09T14:41:15.323156+0000 mon.b (mon.2) 7 : audit [INF] from='osd.6 [v2:192.168.123.111:6816/2386486591,v1:192.168.123.111:6817/2386486591]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T14:41:15.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:15 vm07 bash[55244]: audit 2026-03-09T14:41:15.328530+0000 mon.a (mon.0) 462 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T14:41:15.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:15 vm07 bash[55244]: audit 2026-03-09T14:41:15.328530+0000 mon.a (mon.0) 462 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T14:41:16.002 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:41:15 vm11 bash[49856]: debug 2026-03-09T14:41:15.663+0000 7f9f34f44640 -1 osd.6 122 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T14:41:16.002 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:15 vm11 bash[43577]: cluster 2026-03-09T14:41:14.554079+0000 mgr.y (mgr.44103) 186 : cluster [DBG] pgmap v106: 161 pgs: 19 active+undersized, 2 activating+undersized, 1 activating+undersized+degraded, 17 peering, 14 active+undersized+degraded, 108 active+clean; 457 KiB data, 221 MiB used, 160 GiB / 160 GiB avail; 52/627 objects degraded (8.293%) 2026-03-09T14:41:16.002 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:15 vm11 bash[43577]: cluster 2026-03-09T14:41:14.554079+0000 mgr.y (mgr.44103) 186 : cluster [DBG] pgmap v106: 161 pgs: 19 active+undersized, 2 activating+undersized, 1 activating+undersized+degraded, 17 peering, 14 active+undersized+degraded, 108 active+clean; 457 KiB data, 221 MiB used, 160 GiB / 160 GiB avail; 52/627 objects degraded (8.293%) 2026-03-09T14:41:16.002 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:15 vm11 bash[43577]: audit 2026-03-09T14:41:15.323156+0000 mon.b (mon.2) 7 : audit [INF] from='osd.6 [v2:192.168.123.111:6816/2386486591,v1:192.168.123.111:6817/2386486591]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T14:41:16.002 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:15 vm11 bash[43577]: audit 2026-03-09T14:41:15.323156+0000 mon.b (mon.2) 7 : audit [INF] from='osd.6 [v2:192.168.123.111:6816/2386486591,v1:192.168.123.111:6817/2386486591]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T14:41:16.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:15 vm11 bash[43577]: audit 2026-03-09T14:41:15.328530+0000 mon.a (mon.0) 462 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T14:41:16.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:15 vm11 bash[43577]: audit 2026-03-09T14:41:15.328530+0000 mon.a (mon.0) 462 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T14:41:16.944 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:16 vm11 bash[43577]: audit 2026-03-09T14:41:15.645499+0000 mon.a (mon.0) 463 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 
2026-03-09T14:41:16.944 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:16 vm11 bash[43577]: audit 2026-03-09T14:41:15.645499+0000 mon.a (mon.0) 463 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-09T14:41:16.944 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:16 vm11 bash[43577]: audit 2026-03-09T14:41:15.646049+0000 mon.b (mon.2) 8 : audit [INF] from='osd.6 [v2:192.168.123.111:6816/2386486591,v1:192.168.123.111:6817/2386486591]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:41:16.944 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:16 vm11 bash[43577]: audit 2026-03-09T14:41:15.646049+0000 mon.b (mon.2) 8 : audit [INF] from='osd.6 [v2:192.168.123.111:6816/2386486591,v1:192.168.123.111:6817/2386486591]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:41:16.944 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:16 vm11 bash[43577]: cluster 2026-03-09T14:41:15.648626+0000 mon.a (mon.0) 464 : cluster [DBG] osdmap e125: 8 total, 7 up, 8 in 2026-03-09T14:41:16.944 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:16 vm11 bash[43577]: cluster 2026-03-09T14:41:15.648626+0000 mon.a (mon.0) 464 : cluster [DBG] osdmap e125: 8 total, 7 up, 8 in 2026-03-09T14:41:16.944 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:16 vm11 bash[43577]: audit 2026-03-09T14:41:15.651289+0000 mon.a (mon.0) 465 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:41:16.944 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:16 vm11 bash[43577]: audit 2026-03-09T14:41:15.651289+0000 mon.a (mon.0) 465 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:41:17.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:16 vm07 bash[56315]: audit 2026-03-09T14:41:15.645499+0000 mon.a (mon.0) 463 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-09T14:41:17.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:16 vm07 bash[56315]: audit 2026-03-09T14:41:15.645499+0000 mon.a (mon.0) 463 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-09T14:41:17.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:16 vm07 bash[56315]: audit 2026-03-09T14:41:15.646049+0000 mon.b (mon.2) 8 : audit [INF] from='osd.6 [v2:192.168.123.111:6816/2386486591,v1:192.168.123.111:6817/2386486591]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:41:17.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:16 vm07 bash[56315]: audit 2026-03-09T14:41:15.646049+0000 mon.b (mon.2) 8 : audit [INF] from='osd.6 [v2:192.168.123.111:6816/2386486591,v1:192.168.123.111:6817/2386486591]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:41:17.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:16 vm07 bash[56315]: cluster 
2026-03-09T14:41:15.648626+0000 mon.a (mon.0) 464 : cluster [DBG] osdmap e125: 8 total, 7 up, 8 in 2026-03-09T14:41:17.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:16 vm07 bash[56315]: cluster 2026-03-09T14:41:15.648626+0000 mon.a (mon.0) 464 : cluster [DBG] osdmap e125: 8 total, 7 up, 8 in 2026-03-09T14:41:17.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:16 vm07 bash[56315]: audit 2026-03-09T14:41:15.651289+0000 mon.a (mon.0) 465 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:41:17.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:16 vm07 bash[56315]: audit 2026-03-09T14:41:15.651289+0000 mon.a (mon.0) 465 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:41:17.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:16 vm07 bash[55244]: audit 2026-03-09T14:41:15.645499+0000 mon.a (mon.0) 463 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-09T14:41:17.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:16 vm07 bash[55244]: audit 2026-03-09T14:41:15.645499+0000 mon.a (mon.0) 463 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-09T14:41:17.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:16 vm07 bash[55244]: audit 2026-03-09T14:41:15.646049+0000 mon.b (mon.2) 8 : audit [INF] from='osd.6 [v2:192.168.123.111:6816/2386486591,v1:192.168.123.111:6817/2386486591]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:41:17.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:16 vm07 bash[55244]: audit 2026-03-09T14:41:15.646049+0000 mon.b (mon.2) 8 : audit [INF] from='osd.6 [v2:192.168.123.111:6816/2386486591,v1:192.168.123.111:6817/2386486591]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:41:17.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:16 vm07 bash[55244]: cluster 2026-03-09T14:41:15.648626+0000 mon.a (mon.0) 464 : cluster [DBG] osdmap e125: 8 total, 7 up, 8 in 2026-03-09T14:41:17.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:16 vm07 bash[55244]: cluster 2026-03-09T14:41:15.648626+0000 mon.a (mon.0) 464 : cluster [DBG] osdmap e125: 8 total, 7 up, 8 in 2026-03-09T14:41:17.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:16 vm07 bash[55244]: audit 2026-03-09T14:41:15.651289+0000 mon.a (mon.0) 465 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:41:17.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:16 vm07 bash[55244]: audit 2026-03-09T14:41:15.651289+0000 mon.a (mon.0) 465 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:41:17.235 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:41:16 vm11 bash[41290]: ts=2026-03-09T14:41:16.947Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" 
file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:41:17.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:17 vm07 bash[56315]: cluster 2026-03-09T14:41:16.554479+0000 mgr.y (mgr.44103) 187 : cluster [DBG] pgmap v108: 161 pgs: 33 active+undersized, 20 active+undersized+degraded, 108 active+clean; 457 KiB data, 239 MiB used, 160 GiB / 160 GiB avail; 76/627 objects degraded (12.121%) 2026-03-09T14:41:17.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:17 vm07 bash[56315]: cluster 2026-03-09T14:41:16.554479+0000 mgr.y (mgr.44103) 187 : cluster [DBG] pgmap v108: 161 pgs: 33 active+undersized, 20 active+undersized+degraded, 108 active+clean; 457 KiB data, 239 MiB used, 160 GiB / 160 GiB avail; 76/627 objects degraded (12.121%) 2026-03-09T14:41:17.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:17 vm07 bash[56315]: cluster 2026-03-09T14:41:16.646437+0000 mon.a (mon.0) 466 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:41:17.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:17 vm07 bash[56315]: cluster 2026-03-09T14:41:16.646437+0000 mon.a (mon.0) 466 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:41:17.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:17 vm07 bash[56315]: cluster 2026-03-09T14:41:16.647040+0000 mon.a (mon.0) 467 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 3 pgs peering) 2026-03-09T14:41:17.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:17 vm07 bash[56315]: cluster 2026-03-09T14:41:16.647040+0000 mon.a (mon.0) 467 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 3 pgs peering) 2026-03-09T14:41:17.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:17 vm07 bash[56315]: cluster 2026-03-09T14:41:16.651894+0000 mon.a (mon.0) 468 : cluster [INF] osd.6 [v2:192.168.123.111:6816/2386486591,v1:192.168.123.111:6817/2386486591] boot 2026-03-09T14:41:17.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:17 vm07 bash[56315]: cluster 2026-03-09T14:41:16.651894+0000 mon.a (mon.0) 468 : cluster [INF] osd.6 [v2:192.168.123.111:6816/2386486591,v1:192.168.123.111:6817/2386486591] boot 2026-03-09T14:41:17.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 
14:41:17 vm07 bash[56315]: cluster 2026-03-09T14:41:16.651975+0000 mon.a (mon.0) 469 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in 2026-03-09T14:41:17.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:17 vm07 bash[56315]: cluster 2026-03-09T14:41:16.651975+0000 mon.a (mon.0) 469 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in 2026-03-09T14:41:17.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:17 vm07 bash[56315]: audit 2026-03-09T14:41:16.653718+0000 mon.a (mon.0) 470 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:41:17.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:17 vm07 bash[56315]: audit 2026-03-09T14:41:16.653718+0000 mon.a (mon.0) 470 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:41:17.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:17 vm07 bash[56315]: audit 2026-03-09T14:41:17.412650+0000 mon.a (mon.0) 471 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:17.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:17 vm07 bash[56315]: audit 2026-03-09T14:41:17.412650+0000 mon.a (mon.0) 471 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:17.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:17 vm07 bash[56315]: audit 2026-03-09T14:41:17.418683+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:17.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:17 vm07 bash[56315]: audit 2026-03-09T14:41:17.418683+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:17.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:17 vm07 bash[55244]: cluster 2026-03-09T14:41:16.554479+0000 mgr.y (mgr.44103) 187 : cluster [DBG] pgmap v108: 161 pgs: 33 active+undersized, 20 active+undersized+degraded, 108 active+clean; 457 KiB data, 239 MiB used, 160 GiB / 160 GiB avail; 76/627 objects degraded (12.121%) 2026-03-09T14:41:17.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:17 vm07 bash[55244]: cluster 2026-03-09T14:41:16.554479+0000 mgr.y (mgr.44103) 187 : cluster [DBG] pgmap v108: 161 pgs: 33 active+undersized, 20 active+undersized+degraded, 108 active+clean; 457 KiB data, 239 MiB used, 160 GiB / 160 GiB avail; 76/627 objects degraded (12.121%) 2026-03-09T14:41:17.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:17 vm07 bash[55244]: cluster 2026-03-09T14:41:16.646437+0000 mon.a (mon.0) 466 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:41:17.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:17 vm07 bash[55244]: cluster 2026-03-09T14:41:16.646437+0000 mon.a (mon.0) 466 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:41:17.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:17 vm07 bash[55244]: cluster 2026-03-09T14:41:16.647040+0000 mon.a (mon.0) 467 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 3 pgs peering) 2026-03-09T14:41:17.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:17 vm07 bash[55244]: cluster 2026-03-09T14:41:16.647040+0000 mon.a (mon.0) 467 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 3 pgs peering) 2026-03-09T14:41:17.904 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:17 vm07 bash[55244]: cluster 2026-03-09T14:41:16.651894+0000 mon.a (mon.0) 468 : cluster [INF] osd.6 [v2:192.168.123.111:6816/2386486591,v1:192.168.123.111:6817/2386486591] boot 2026-03-09T14:41:17.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:17 vm07 bash[55244]: cluster 2026-03-09T14:41:16.651894+0000 mon.a (mon.0) 468 : cluster [INF] osd.6 [v2:192.168.123.111:6816/2386486591,v1:192.168.123.111:6817/2386486591] boot 2026-03-09T14:41:17.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:17 vm07 bash[55244]: cluster 2026-03-09T14:41:16.651975+0000 mon.a (mon.0) 469 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in 2026-03-09T14:41:17.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:17 vm07 bash[55244]: cluster 2026-03-09T14:41:16.651975+0000 mon.a (mon.0) 469 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in 2026-03-09T14:41:17.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:17 vm07 bash[55244]: audit 2026-03-09T14:41:16.653718+0000 mon.a (mon.0) 470 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:41:17.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:17 vm07 bash[55244]: audit 2026-03-09T14:41:16.653718+0000 mon.a (mon.0) 470 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:41:17.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:17 vm07 bash[55244]: audit 2026-03-09T14:41:17.412650+0000 mon.a (mon.0) 471 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:17.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:17 vm07 bash[55244]: audit 2026-03-09T14:41:17.412650+0000 mon.a (mon.0) 471 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:17.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:17 vm07 bash[55244]: audit 2026-03-09T14:41:17.418683+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:17.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:17 vm07 bash[55244]: audit 2026-03-09T14:41:17.418683+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:18.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:17 vm11 bash[43577]: cluster 2026-03-09T14:41:16.554479+0000 mgr.y (mgr.44103) 187 : cluster [DBG] pgmap v108: 161 pgs: 33 active+undersized, 20 active+undersized+degraded, 108 active+clean; 457 KiB data, 239 MiB used, 160 GiB / 160 GiB avail; 76/627 objects degraded (12.121%) 2026-03-09T14:41:18.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:17 vm11 bash[43577]: cluster 2026-03-09T14:41:16.554479+0000 mgr.y (mgr.44103) 187 : cluster [DBG] pgmap v108: 161 pgs: 33 active+undersized, 20 active+undersized+degraded, 108 active+clean; 457 KiB data, 239 MiB used, 160 GiB / 160 GiB avail; 76/627 objects degraded (12.121%) 2026-03-09T14:41:18.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:17 vm11 bash[43577]: cluster 2026-03-09T14:41:16.646437+0000 mon.a (mon.0) 466 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:41:18.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:17 vm11 bash[43577]: cluster 2026-03-09T14:41:16.646437+0000 mon.a (mon.0) 466 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:41:18.003 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:17 vm11 bash[43577]: cluster 2026-03-09T14:41:16.647040+0000 mon.a (mon.0) 467 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 3 pgs peering) 2026-03-09T14:41:18.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:17 vm11 bash[43577]: cluster 2026-03-09T14:41:16.647040+0000 mon.a (mon.0) 467 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 3 pgs peering) 2026-03-09T14:41:18.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:17 vm11 bash[43577]: cluster 2026-03-09T14:41:16.651894+0000 mon.a (mon.0) 468 : cluster [INF] osd.6 [v2:192.168.123.111:6816/2386486591,v1:192.168.123.111:6817/2386486591] boot 2026-03-09T14:41:18.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:17 vm11 bash[43577]: cluster 2026-03-09T14:41:16.651894+0000 mon.a (mon.0) 468 : cluster [INF] osd.6 [v2:192.168.123.111:6816/2386486591,v1:192.168.123.111:6817/2386486591] boot 2026-03-09T14:41:18.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:17 vm11 bash[43577]: cluster 2026-03-09T14:41:16.651975+0000 mon.a (mon.0) 469 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in 2026-03-09T14:41:18.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:17 vm11 bash[43577]: cluster 2026-03-09T14:41:16.651975+0000 mon.a (mon.0) 469 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in 2026-03-09T14:41:18.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:17 vm11 bash[43577]: audit 2026-03-09T14:41:16.653718+0000 mon.a (mon.0) 470 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:41:18.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:17 vm11 bash[43577]: audit 2026-03-09T14:41:16.653718+0000 mon.a (mon.0) 470 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:41:18.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:17 vm11 bash[43577]: audit 2026-03-09T14:41:17.412650+0000 mon.a (mon.0) 471 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:18.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:17 vm11 bash[43577]: audit 2026-03-09T14:41:17.412650+0000 mon.a (mon.0) 471 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:18.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:17 vm11 bash[43577]: audit 2026-03-09T14:41:17.418683+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:18.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:17 vm11 bash[43577]: audit 2026-03-09T14:41:17.418683+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:19.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:18 vm11 bash[43577]: audit 2026-03-09T14:41:17.548489+0000 mgr.y (mgr.44103) 188 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:41:19.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:18 vm11 bash[43577]: audit 2026-03-09T14:41:17.548489+0000 mgr.y (mgr.44103) 188 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:41:19.003 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:18 vm11 bash[43577]: cluster 2026-03-09T14:41:17.679085+0000 mon.a (mon.0) 473 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in 2026-03-09T14:41:19.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:18 vm11 bash[43577]: cluster 2026-03-09T14:41:17.679085+0000 mon.a (mon.0) 473 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in 2026-03-09T14:41:19.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:18 vm11 bash[43577]: audit 2026-03-09T14:41:18.001108+0000 mon.a (mon.0) 474 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:19.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:18 vm11 bash[43577]: audit 2026-03-09T14:41:18.001108+0000 mon.a (mon.0) 474 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:19.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:18 vm11 bash[43577]: audit 2026-03-09T14:41:18.010559+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:19.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:18 vm11 bash[43577]: audit 2026-03-09T14:41:18.010559+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:19.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:18 vm07 bash[55244]: audit 2026-03-09T14:41:17.548489+0000 mgr.y (mgr.44103) 188 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:41:19.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:18 vm07 bash[55244]: audit 2026-03-09T14:41:17.548489+0000 mgr.y (mgr.44103) 188 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:41:19.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:18 vm07 bash[55244]: cluster 2026-03-09T14:41:17.679085+0000 mon.a (mon.0) 473 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in 2026-03-09T14:41:19.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:18 vm07 bash[55244]: cluster 2026-03-09T14:41:17.679085+0000 mon.a (mon.0) 473 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in 2026-03-09T14:41:19.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:18 vm07 bash[55244]: audit 2026-03-09T14:41:18.001108+0000 mon.a (mon.0) 474 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:19.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:18 vm07 bash[55244]: audit 2026-03-09T14:41:18.001108+0000 mon.a (mon.0) 474 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:19.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:18 vm07 bash[55244]: audit 2026-03-09T14:41:18.010559+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:19.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:18 vm07 bash[55244]: audit 2026-03-09T14:41:18.010559+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:19.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:18 vm07 bash[56315]: audit 2026-03-09T14:41:17.548489+0000 mgr.y (mgr.44103) 188 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:41:19.154 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:18 vm07 bash[56315]: audit 2026-03-09T14:41:17.548489+0000 mgr.y (mgr.44103) 188 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:41:19.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:18 vm07 bash[56315]: cluster 2026-03-09T14:41:17.679085+0000 mon.a (mon.0) 473 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in 2026-03-09T14:41:19.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:18 vm07 bash[56315]: cluster 2026-03-09T14:41:17.679085+0000 mon.a (mon.0) 473 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in 2026-03-09T14:41:19.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:18 vm07 bash[56315]: audit 2026-03-09T14:41:18.001108+0000 mon.a (mon.0) 474 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:19.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:18 vm07 bash[56315]: audit 2026-03-09T14:41:18.001108+0000 mon.a (mon.0) 474 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:19.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:18 vm07 bash[56315]: audit 2026-03-09T14:41:18.010559+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:19.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:18 vm07 bash[56315]: audit 2026-03-09T14:41:18.010559+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:20.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:19 vm11 bash[43577]: cluster 2026-03-09T14:41:18.554825+0000 mgr.y (mgr.44103) 189 : cluster [DBG] pgmap v111: 161 pgs: 22 active+undersized, 13 active+undersized+degraded, 126 active+clean; 457 KiB data, 240 MiB used, 160 GiB / 160 GiB avail; 49/627 objects degraded (7.815%) 2026-03-09T14:41:20.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:19 vm11 bash[43577]: cluster 2026-03-09T14:41:18.554825+0000 mgr.y (mgr.44103) 189 : cluster [DBG] pgmap v111: 161 pgs: 22 active+undersized, 13 active+undersized+degraded, 126 active+clean; 457 KiB data, 240 MiB used, 160 GiB / 160 GiB avail; 49/627 objects degraded (7.815%) 2026-03-09T14:41:20.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:19 vm11 bash[43577]: cluster 2026-03-09T14:41:19.009262+0000 mon.a (mon.0) 476 : cluster [WRN] Health check update: Degraded data redundancy: 49/627 objects degraded (7.815%), 13 pgs degraded (PG_DEGRADED) 2026-03-09T14:41:20.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:19 vm11 bash[43577]: cluster 2026-03-09T14:41:19.009262+0000 mon.a (mon.0) 476 : cluster [WRN] Health check update: Degraded data redundancy: 49/627 objects degraded (7.815%), 13 pgs degraded (PG_DEGRADED) 2026-03-09T14:41:20.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:19 vm07 bash[56315]: cluster 2026-03-09T14:41:18.554825+0000 mgr.y (mgr.44103) 189 : cluster [DBG] pgmap v111: 161 pgs: 22 active+undersized, 13 active+undersized+degraded, 126 active+clean; 457 KiB data, 240 MiB used, 160 GiB / 160 GiB avail; 49/627 objects degraded (7.815%) 2026-03-09T14:41:20.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:19 vm07 bash[56315]: cluster 2026-03-09T14:41:18.554825+0000 mgr.y (mgr.44103) 189 : cluster [DBG] pgmap v111: 161 pgs: 22 active+undersized, 13 active+undersized+degraded, 126 active+clean; 457 KiB data, 240 MiB used, 160 GiB / 160 GiB avail; 49/627 objects degraded 
(7.815%) 2026-03-09T14:41:20.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:19 vm07 bash[56315]: cluster 2026-03-09T14:41:19.009262+0000 mon.a (mon.0) 476 : cluster [WRN] Health check update: Degraded data redundancy: 49/627 objects degraded (7.815%), 13 pgs degraded (PG_DEGRADED) 2026-03-09T14:41:20.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:19 vm07 bash[56315]: cluster 2026-03-09T14:41:19.009262+0000 mon.a (mon.0) 476 : cluster [WRN] Health check update: Degraded data redundancy: 49/627 objects degraded (7.815%), 13 pgs degraded (PG_DEGRADED) 2026-03-09T14:41:20.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:19 vm07 bash[55244]: cluster 2026-03-09T14:41:18.554825+0000 mgr.y (mgr.44103) 189 : cluster [DBG] pgmap v111: 161 pgs: 22 active+undersized, 13 active+undersized+degraded, 126 active+clean; 457 KiB data, 240 MiB used, 160 GiB / 160 GiB avail; 49/627 objects degraded (7.815%) 2026-03-09T14:41:20.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:19 vm07 bash[55244]: cluster 2026-03-09T14:41:18.554825+0000 mgr.y (mgr.44103) 189 : cluster [DBG] pgmap v111: 161 pgs: 22 active+undersized, 13 active+undersized+degraded, 126 active+clean; 457 KiB data, 240 MiB used, 160 GiB / 160 GiB avail; 49/627 objects degraded (7.815%) 2026-03-09T14:41:20.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:19 vm07 bash[55244]: cluster 2026-03-09T14:41:19.009262+0000 mon.a (mon.0) 476 : cluster [WRN] Health check update: Degraded data redundancy: 49/627 objects degraded (7.815%), 13 pgs degraded (PG_DEGRADED) 2026-03-09T14:41:20.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:19 vm07 bash[55244]: cluster 2026-03-09T14:41:19.009262+0000 mon.a (mon.0) 476 : cluster [WRN] Health check update: Degraded data redundancy: 49/627 objects degraded (7.815%), 13 pgs degraded (PG_DEGRADED) 2026-03-09T14:41:22.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:21 vm11 bash[43577]: cluster 2026-03-09T14:41:20.555192+0000 mgr.y (mgr.44103) 190 : cluster [DBG] pgmap v112: 161 pgs: 15 active+undersized, 9 active+undersized+degraded, 137 active+clean; 457 KiB data, 244 MiB used, 160 GiB / 160 GiB avail; 29/627 objects degraded (4.625%) 2026-03-09T14:41:22.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:21 vm11 bash[43577]: cluster 2026-03-09T14:41:20.555192+0000 mgr.y (mgr.44103) 190 : cluster [DBG] pgmap v112: 161 pgs: 15 active+undersized, 9 active+undersized+degraded, 137 active+clean; 457 KiB data, 244 MiB used, 160 GiB / 160 GiB avail; 29/627 objects degraded (4.625%) 2026-03-09T14:41:22.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:21 vm07 bash[56315]: cluster 2026-03-09T14:41:20.555192+0000 mgr.y (mgr.44103) 190 : cluster [DBG] pgmap v112: 161 pgs: 15 active+undersized, 9 active+undersized+degraded, 137 active+clean; 457 KiB data, 244 MiB used, 160 GiB / 160 GiB avail; 29/627 objects degraded (4.625%) 2026-03-09T14:41:22.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:21 vm07 bash[56315]: cluster 2026-03-09T14:41:20.555192+0000 mgr.y (mgr.44103) 190 : cluster [DBG] pgmap v112: 161 pgs: 15 active+undersized, 9 active+undersized+degraded, 137 active+clean; 457 KiB data, 244 MiB used, 160 GiB / 160 GiB avail; 29/627 objects degraded (4.625%) 2026-03-09T14:41:22.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:21 vm07 bash[55244]: cluster 2026-03-09T14:41:20.555192+0000 mgr.y (mgr.44103) 190 : cluster [DBG] pgmap v112: 161 pgs: 15 active+undersized, 9 active+undersized+degraded, 137 active+clean; 457 KiB data, 244 MiB used, 160 GiB / 
160 GiB avail; 29/627 objects degraded (4.625%) 2026-03-09T14:41:22.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:21 vm07 bash[55244]: cluster 2026-03-09T14:41:20.555192+0000 mgr.y (mgr.44103) 190 : cluster [DBG] pgmap v112: 161 pgs: 15 active+undersized, 9 active+undersized+degraded, 137 active+clean; 457 KiB data, 244 MiB used, 160 GiB / 160 GiB avail; 29/627 objects degraded (4.625%) 2026-03-09T14:41:23.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:23 vm07 bash[56315]: cluster 2026-03-09T14:41:22.555609+0000 mgr.y (mgr.44103) 191 : cluster [DBG] pgmap v113: 161 pgs: 161 active+clean; 457 KiB data, 244 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:41:23.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:23 vm07 bash[56315]: cluster 2026-03-09T14:41:22.555609+0000 mgr.y (mgr.44103) 191 : cluster [DBG] pgmap v113: 161 pgs: 161 active+clean; 457 KiB data, 244 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:41:23.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:23 vm07 bash[56315]: audit 2026-03-09T14:41:22.579017+0000 mon.a (mon.0) 477 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:23.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:23 vm07 bash[56315]: audit 2026-03-09T14:41:22.579017+0000 mon.a (mon.0) 477 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:23.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:23 vm07 bash[56315]: audit 2026-03-09T14:41:22.579869+0000 mon.a (mon.0) 478 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:41:23.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:23 vm07 bash[56315]: audit 2026-03-09T14:41:22.579869+0000 mon.a (mon.0) 478 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:41:23.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:23 vm07 bash[56315]: cluster 2026-03-09T14:41:22.706137+0000 mon.a (mon.0) 479 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 29/627 objects degraded (4.625%), 9 pgs degraded) 2026-03-09T14:41:23.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:23 vm07 bash[56315]: cluster 2026-03-09T14:41:22.706137+0000 mon.a (mon.0) 479 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 29/627 objects degraded (4.625%), 9 pgs degraded) 2026-03-09T14:41:23.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:23 vm07 bash[56315]: cluster 2026-03-09T14:41:22.706157+0000 mon.a (mon.0) 480 : cluster [INF] Cluster is now healthy 2026-03-09T14:41:23.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:23 vm07 bash[56315]: cluster 2026-03-09T14:41:22.706157+0000 mon.a (mon.0) 480 : cluster [INF] Cluster is now healthy 2026-03-09T14:41:23.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:23 vm07 bash[55244]: cluster 2026-03-09T14:41:22.555609+0000 mgr.y (mgr.44103) 191 : cluster [DBG] pgmap v113: 161 pgs: 161 active+clean; 457 KiB data, 244 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:41:23.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:23 vm07 bash[55244]: cluster 2026-03-09T14:41:22.555609+0000 mgr.y (mgr.44103) 191 : cluster [DBG] pgmap v113: 161 pgs: 161 active+clean; 457 KiB data, 244 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:41:23.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:23 vm07 
bash[55244]: audit 2026-03-09T14:41:22.579017+0000 mon.a (mon.0) 477 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:23.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:23 vm07 bash[55244]: audit 2026-03-09T14:41:22.579017+0000 mon.a (mon.0) 477 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:23.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:23 vm07 bash[55244]: audit 2026-03-09T14:41:22.579869+0000 mon.a (mon.0) 478 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:41:23.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:23 vm07 bash[55244]: audit 2026-03-09T14:41:22.579869+0000 mon.a (mon.0) 478 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:41:23.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:23 vm07 bash[55244]: cluster 2026-03-09T14:41:22.706137+0000 mon.a (mon.0) 479 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 29/627 objects degraded (4.625%), 9 pgs degraded) 2026-03-09T14:41:23.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:23 vm07 bash[55244]: cluster 2026-03-09T14:41:22.706137+0000 mon.a (mon.0) 479 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 29/627 objects degraded (4.625%), 9 pgs degraded) 2026-03-09T14:41:23.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:23 vm07 bash[55244]: cluster 2026-03-09T14:41:22.706157+0000 mon.a (mon.0) 480 : cluster [INF] Cluster is now healthy 2026-03-09T14:41:23.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:23 vm07 bash[55244]: cluster 2026-03-09T14:41:22.706157+0000 mon.a (mon.0) 480 : cluster [INF] Cluster is now healthy 2026-03-09T14:41:23.904 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:41:23 vm07 bash[52213]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:41:23] "GET /metrics HTTP/1.1" 200 38111 "" "Prometheus/2.51.0" 2026-03-09T14:41:24.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:23 vm11 bash[43577]: cluster 2026-03-09T14:41:22.555609+0000 mgr.y (mgr.44103) 191 : cluster [DBG] pgmap v113: 161 pgs: 161 active+clean; 457 KiB data, 244 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:41:24.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:23 vm11 bash[43577]: cluster 2026-03-09T14:41:22.555609+0000 mgr.y (mgr.44103) 191 : cluster [DBG] pgmap v113: 161 pgs: 161 active+clean; 457 KiB data, 244 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:41:24.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:23 vm11 bash[43577]: audit 2026-03-09T14:41:22.579017+0000 mon.a (mon.0) 477 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:24.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:23 vm11 bash[43577]: audit 2026-03-09T14:41:22.579017+0000 mon.a (mon.0) 477 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:24.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:23 vm11 bash[43577]: audit 2026-03-09T14:41:22.579869+0000 mon.a (mon.0) 478 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:41:24.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:23 vm11 bash[43577]: audit 2026-03-09T14:41:22.579869+0000 mon.a (mon.0) 478 : 
audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:41:24.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:23 vm11 bash[43577]: cluster 2026-03-09T14:41:22.706137+0000 mon.a (mon.0) 479 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 29/627 objects degraded (4.625%), 9 pgs degraded) 2026-03-09T14:41:24.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:23 vm11 bash[43577]: cluster 2026-03-09T14:41:22.706137+0000 mon.a (mon.0) 479 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 29/627 objects degraded (4.625%), 9 pgs degraded) 2026-03-09T14:41:24.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:23 vm11 bash[43577]: cluster 2026-03-09T14:41:22.706157+0000 mon.a (mon.0) 480 : cluster [INF] Cluster is now healthy 2026-03-09T14:41:24.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:23 vm11 bash[43577]: cluster 2026-03-09T14:41:22.706157+0000 mon.a (mon.0) 480 : cluster [INF] Cluster is now healthy 2026-03-09T14:41:24.502 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:41:24 vm11 bash[41290]: ts=2026-03-09T14:41:24.146Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.7\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.7\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", cluster_addr=\"192.168.123.111\", device_class=\"hdd\", hostname=\"vm11\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.111\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.7\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.111\", device_class=\"hdd\", hostname=\"vm11\", instance=\"192.168.123.111:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.111\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:41:25.643 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:25 vm11 bash[43577]: cluster 2026-03-09T14:41:24.556145+0000 mgr.y (mgr.44103) 192 : cluster [DBG] pgmap v114: 161 pgs: 161 active+clean; 457 KiB data, 244 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T14:41:25.643 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:25 vm11 bash[43577]: cluster 2026-03-09T14:41:24.556145+0000 mgr.y (mgr.44103) 192 : cluster [DBG] pgmap v114: 161 pgs: 161 active+clean; 457 KiB data, 244 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T14:41:25.643 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:25 vm11 bash[43577]: audit 2026-03-09T14:41:24.639035+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:25.643 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:25 vm11 bash[43577]: audit 2026-03-09T14:41:24.639035+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:25.643 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:25 vm11 bash[43577]: audit 2026-03-09T14:41:24.643904+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:25.643 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:25 vm11 bash[43577]: audit 2026-03-09T14:41:24.643904+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:25.643 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:25 vm11 bash[43577]: audit 2026-03-09T14:41:24.645398+0000 mon.a (mon.0) 483 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:25.643 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:25 vm11 bash[43577]: audit 2026-03-09T14:41:24.645398+0000 mon.a (mon.0) 483 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:25.643 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:25 vm11 bash[43577]: audit 2026-03-09T14:41:24.646206+0000 mon.a (mon.0) 484 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:41:25.643 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 
09 14:41:25 vm11 bash[43577]: audit 2026-03-09T14:41:24.646206+0000 mon.a (mon.0) 484 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:41:25.643 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:25 vm11 bash[43577]: audit 2026-03-09T14:41:24.650588+0000 mon.a (mon.0) 485 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:25.643 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:25 vm11 bash[43577]: audit 2026-03-09T14:41:24.650588+0000 mon.a (mon.0) 485 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:25.643 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:25 vm11 bash[43577]: audit 2026-03-09T14:41:24.692215+0000 mon.a (mon.0) 486 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:41:25.643 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:25 vm11 bash[43577]: audit 2026-03-09T14:41:24.692215+0000 mon.a (mon.0) 486 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:41:25.644 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:25 vm11 bash[43577]: audit 2026-03-09T14:41:24.693881+0000 mon.a (mon.0) 487 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:25.644 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:25 vm11 bash[43577]: audit 2026-03-09T14:41:24.693881+0000 mon.a (mon.0) 487 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:25.644 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:25 vm11 bash[43577]: audit 2026-03-09T14:41:24.695102+0000 mon.a (mon.0) 488 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:25.644 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:25 vm11 bash[43577]: audit 2026-03-09T14:41:24.695102+0000 mon.a (mon.0) 488 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:25.644 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:25 vm11 bash[43577]: audit 2026-03-09T14:41:24.696147+0000 mon.a (mon.0) 489 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:25.644 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:25 vm11 bash[43577]: audit 2026-03-09T14:41:24.696147+0000 mon.a (mon.0) 489 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:25.644 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:25 vm11 bash[43577]: audit 2026-03-09T14:41:24.697319+0000 mon.a (mon.0) 490 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["7"], "max": 16}]: dispatch 2026-03-09T14:41:25.644 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:25 vm11 bash[43577]: audit 2026-03-09T14:41:24.697319+0000 mon.a (mon.0) 490 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["7"], "max": 16}]: dispatch 2026-03-09T14:41:25.644 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:25 vm11 bash[43577]: audit 
2026-03-09T14:41:24.697497+0000 mgr.y (mgr.44103) 193 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["7"], "max": 16}]: dispatch 2026-03-09T14:41:25.644 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:25 vm11 bash[43577]: audit 2026-03-09T14:41:24.697497+0000 mgr.y (mgr.44103) 193 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["7"], "max": 16}]: dispatch 2026-03-09T14:41:25.644 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:25 vm11 bash[43577]: cephadm 2026-03-09T14:41:24.698269+0000 mgr.y (mgr.44103) 194 : cephadm [INF] Upgrade: osd.7 is safe to restart 2026-03-09T14:41:25.644 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:25 vm11 bash[43577]: cephadm 2026-03-09T14:41:24.698269+0000 mgr.y (mgr.44103) 194 : cephadm [INF] Upgrade: osd.7 is safe to restart 2026-03-09T14:41:25.644 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:25 vm11 bash[43577]: cephadm 2026-03-09T14:41:25.099821+0000 mgr.y (mgr.44103) 195 : cephadm [INF] Upgrade: Updating osd.7 2026-03-09T14:41:25.644 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:25 vm11 bash[43577]: cephadm 2026-03-09T14:41:25.099821+0000 mgr.y (mgr.44103) 195 : cephadm [INF] Upgrade: Updating osd.7 2026-03-09T14:41:25.644 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:25 vm11 bash[43577]: audit 2026-03-09T14:41:25.104269+0000 mon.a (mon.0) 491 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:25.644 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:25 vm11 bash[43577]: audit 2026-03-09T14:41:25.104269+0000 mon.a (mon.0) 491 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:25.644 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:25 vm11 bash[43577]: audit 2026-03-09T14:41:25.107988+0000 mon.a (mon.0) 492 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T14:41:25.644 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:25 vm11 bash[43577]: audit 2026-03-09T14:41:25.107988+0000 mon.a (mon.0) 492 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T14:41:25.644 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:25 vm11 bash[43577]: audit 2026-03-09T14:41:25.109403+0000 mon.a (mon.0) 493 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:25.644 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:25 vm11 bash[43577]: audit 2026-03-09T14:41:25.109403+0000 mon.a (mon.0) 493 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:25.644 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:25 vm11 bash[43577]: cephadm 2026-03-09T14:41:25.110926+0000 mgr.y (mgr.44103) 196 : cephadm [INF] Deploying daemon osd.7 on vm11 2026-03-09T14:41:25.644 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:25 vm11 bash[43577]: cephadm 2026-03-09T14:41:25.110926+0000 mgr.y (mgr.44103) 196 : cephadm [INF] Deploying daemon osd.7 on vm11 2026-03-09T14:41:25.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:25 vm07 bash[56315]: cluster 2026-03-09T14:41:24.556145+0000 mgr.y (mgr.44103) 192 : cluster [DBG] pgmap v114: 161 pgs: 161 active+clean; 457 KiB data, 244 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 
2026-03-09T14:41:25.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:25 vm07 bash[56315]: cluster 2026-03-09T14:41:24.556145+0000 mgr.y (mgr.44103) 192 : cluster [DBG] pgmap v114: 161 pgs: 161 active+clean; 457 KiB data, 244 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T14:41:25.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:25 vm07 bash[56315]: audit 2026-03-09T14:41:24.639035+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:25.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:25 vm07 bash[56315]: audit 2026-03-09T14:41:24.639035+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:25.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:25 vm07 bash[56315]: audit 2026-03-09T14:41:24.643904+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:25.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:25 vm07 bash[56315]: audit 2026-03-09T14:41:24.643904+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:25.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:25 vm07 bash[56315]: audit 2026-03-09T14:41:24.645398+0000 mon.a (mon.0) 483 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:25.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:25 vm07 bash[56315]: audit 2026-03-09T14:41:24.645398+0000 mon.a (mon.0) 483 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:25.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:25 vm07 bash[56315]: audit 2026-03-09T14:41:24.646206+0000 mon.a (mon.0) 484 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:41:25.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:25 vm07 bash[56315]: audit 2026-03-09T14:41:24.646206+0000 mon.a (mon.0) 484 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:41:25.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:25 vm07 bash[56315]: audit 2026-03-09T14:41:24.650588+0000 mon.a (mon.0) 485 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:25.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:25 vm07 bash[56315]: audit 2026-03-09T14:41:24.650588+0000 mon.a (mon.0) 485 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:25.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:25 vm07 bash[56315]: audit 2026-03-09T14:41:24.692215+0000 mon.a (mon.0) 486 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:41:25.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:25 vm07 bash[56315]: audit 2026-03-09T14:41:24.692215+0000 mon.a (mon.0) 486 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:41:25.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:25 vm07 bash[56315]: audit 2026-03-09T14:41:24.693881+0000 mon.a (mon.0) 487 : audit [DBG] 
from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:25.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:25 vm07 bash[56315]: audit 2026-03-09T14:41:24.693881+0000 mon.a (mon.0) 487 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:25.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:25 vm07 bash[56315]: audit 2026-03-09T14:41:24.695102+0000 mon.a (mon.0) 488 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:25.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:25 vm07 bash[56315]: audit 2026-03-09T14:41:24.695102+0000 mon.a (mon.0) 488 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:25.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:25 vm07 bash[56315]: audit 2026-03-09T14:41:24.696147+0000 mon.a (mon.0) 489 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:25.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:25 vm07 bash[56315]: audit 2026-03-09T14:41:24.696147+0000 mon.a (mon.0) 489 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:25 vm07 bash[55244]: cluster 2026-03-09T14:41:24.556145+0000 mgr.y (mgr.44103) 192 : cluster [DBG] pgmap v114: 161 pgs: 161 active+clean; 457 KiB data, 244 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:25 vm07 bash[55244]: cluster 2026-03-09T14:41:24.556145+0000 mgr.y (mgr.44103) 192 : cluster [DBG] pgmap v114: 161 pgs: 161 active+clean; 457 KiB data, 244 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:25 vm07 bash[55244]: audit 2026-03-09T14:41:24.639035+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:25 vm07 bash[55244]: audit 2026-03-09T14:41:24.639035+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:25 vm07 bash[55244]: audit 2026-03-09T14:41:24.643904+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:25 vm07 bash[55244]: audit 2026-03-09T14:41:24.643904+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:25 vm07 bash[55244]: audit 2026-03-09T14:41:24.645398+0000 mon.a (mon.0) 483 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:25 vm07 bash[55244]: audit 2026-03-09T14:41:24.645398+0000 mon.a (mon.0) 483 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 
14:41:25 vm07 bash[55244]: audit 2026-03-09T14:41:24.646206+0000 mon.a (mon.0) 484 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:25 vm07 bash[55244]: audit 2026-03-09T14:41:24.646206+0000 mon.a (mon.0) 484 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:25 vm07 bash[55244]: audit 2026-03-09T14:41:24.650588+0000 mon.a (mon.0) 485 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:25 vm07 bash[55244]: audit 2026-03-09T14:41:24.650588+0000 mon.a (mon.0) 485 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:25 vm07 bash[55244]: audit 2026-03-09T14:41:24.692215+0000 mon.a (mon.0) 486 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:25 vm07 bash[55244]: audit 2026-03-09T14:41:24.692215+0000 mon.a (mon.0) 486 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:25 vm07 bash[55244]: audit 2026-03-09T14:41:24.693881+0000 mon.a (mon.0) 487 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:25 vm07 bash[55244]: audit 2026-03-09T14:41:24.693881+0000 mon.a (mon.0) 487 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:25 vm07 bash[55244]: audit 2026-03-09T14:41:24.695102+0000 mon.a (mon.0) 488 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:25 vm07 bash[55244]: audit 2026-03-09T14:41:24.695102+0000 mon.a (mon.0) 488 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:25 vm07 bash[55244]: audit 2026-03-09T14:41:24.696147+0000 mon.a (mon.0) 489 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:25 vm07 bash[55244]: audit 2026-03-09T14:41:24.696147+0000 mon.a (mon.0) 489 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:25 vm07 bash[55244]: audit 2026-03-09T14:41:24.697319+0000 mon.a (mon.0) 490 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["7"], "max": 16}]: dispatch 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:25 vm07 bash[55244]: audit 
2026-03-09T14:41:24.697319+0000 mon.a (mon.0) 490 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["7"], "max": 16}]: dispatch 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:25 vm07 bash[55244]: audit 2026-03-09T14:41:24.697497+0000 mgr.y (mgr.44103) 193 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["7"], "max": 16}]: dispatch 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:25 vm07 bash[55244]: audit 2026-03-09T14:41:24.697497+0000 mgr.y (mgr.44103) 193 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["7"], "max": 16}]: dispatch 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:25 vm07 bash[55244]: cephadm 2026-03-09T14:41:24.698269+0000 mgr.y (mgr.44103) 194 : cephadm [INF] Upgrade: osd.7 is safe to restart 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:25 vm07 bash[55244]: cephadm 2026-03-09T14:41:24.698269+0000 mgr.y (mgr.44103) 194 : cephadm [INF] Upgrade: osd.7 is safe to restart 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:25 vm07 bash[55244]: cephadm 2026-03-09T14:41:25.099821+0000 mgr.y (mgr.44103) 195 : cephadm [INF] Upgrade: Updating osd.7 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:25 vm07 bash[55244]: cephadm 2026-03-09T14:41:25.099821+0000 mgr.y (mgr.44103) 195 : cephadm [INF] Upgrade: Updating osd.7 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:25 vm07 bash[55244]: audit 2026-03-09T14:41:25.104269+0000 mon.a (mon.0) 491 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:25 vm07 bash[55244]: audit 2026-03-09T14:41:25.104269+0000 mon.a (mon.0) 491 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:25 vm07 bash[55244]: audit 2026-03-09T14:41:25.107988+0000 mon.a (mon.0) 492 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:25 vm07 bash[55244]: audit 2026-03-09T14:41:25.107988+0000 mon.a (mon.0) 492 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:25 vm07 bash[55244]: audit 2026-03-09T14:41:25.109403+0000 mon.a (mon.0) 493 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:25 vm07 bash[55244]: audit 2026-03-09T14:41:25.109403+0000 mon.a (mon.0) 493 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:25 vm07 bash[55244]: cephadm 2026-03-09T14:41:25.110926+0000 mgr.y (mgr.44103) 196 : cephadm [INF] Deploying daemon osd.7 on vm11 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:25 vm07 bash[55244]: cephadm 2026-03-09T14:41:25.110926+0000 mgr.y (mgr.44103) 196 : cephadm [INF] Deploying daemon osd.7 on 
vm11 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:25 vm07 bash[56315]: audit 2026-03-09T14:41:24.697319+0000 mon.a (mon.0) 490 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["7"], "max": 16}]: dispatch 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:25 vm07 bash[56315]: audit 2026-03-09T14:41:24.697319+0000 mon.a (mon.0) 490 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["7"], "max": 16}]: dispatch 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:25 vm07 bash[56315]: audit 2026-03-09T14:41:24.697497+0000 mgr.y (mgr.44103) 193 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["7"], "max": 16}]: dispatch 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:25 vm07 bash[56315]: audit 2026-03-09T14:41:24.697497+0000 mgr.y (mgr.44103) 193 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["7"], "max": 16}]: dispatch 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:25 vm07 bash[56315]: cephadm 2026-03-09T14:41:24.698269+0000 mgr.y (mgr.44103) 194 : cephadm [INF] Upgrade: osd.7 is safe to restart 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:25 vm07 bash[56315]: cephadm 2026-03-09T14:41:24.698269+0000 mgr.y (mgr.44103) 194 : cephadm [INF] Upgrade: osd.7 is safe to restart 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:25 vm07 bash[56315]: cephadm 2026-03-09T14:41:25.099821+0000 mgr.y (mgr.44103) 195 : cephadm [INF] Upgrade: Updating osd.7 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:25 vm07 bash[56315]: cephadm 2026-03-09T14:41:25.099821+0000 mgr.y (mgr.44103) 195 : cephadm [INF] Upgrade: Updating osd.7 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:25 vm07 bash[56315]: audit 2026-03-09T14:41:25.104269+0000 mon.a (mon.0) 491 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:25 vm07 bash[56315]: audit 2026-03-09T14:41:25.104269+0000 mon.a (mon.0) 491 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:25 vm07 bash[56315]: audit 2026-03-09T14:41:25.107988+0000 mon.a (mon.0) 492 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:25 vm07 bash[56315]: audit 2026-03-09T14:41:25.107988+0000 mon.a (mon.0) 492 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:25 vm07 bash[56315]: audit 2026-03-09T14:41:25.109403+0000 mon.a (mon.0) 493 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:25 vm07 bash[56315]: audit 2026-03-09T14:41:25.109403+0000 mon.a (mon.0) 493 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:25 vm07 bash[56315]: cephadm 2026-03-09T14:41:25.110926+0000 mgr.y (mgr.44103) 196 : cephadm [INF] Deploying daemon osd.7 on vm11 2026-03-09T14:41:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:25 vm07 bash[56315]: cephadm 2026-03-09T14:41:25.110926+0000 mgr.y (mgr.44103) 196 : cephadm [INF] Deploying daemon osd.7 on vm11 2026-03-09T14:41:25.909 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:41:25 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:25.909 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:41:25 vm11 systemd[1]: Stopping Ceph osd.7 for f59f9828-1bc3-11f1-bfd8-7b3d0c866040... 2026-03-09T14:41:25.909 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:41:25 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:25.909 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:41:25 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:25.909 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:25 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:25.909 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:41:25 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:25.909 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:41:25 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:25.909 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:41:25 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:25.909 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:41:25 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:25.909 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:41:25 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:26.252 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:41:25 vm11 bash[30285]: debug 2026-03-09T14:41:25.911+0000 7fb5c65b8700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.7 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T14:41:26.252 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:41:25 vm11 bash[30285]: debug 2026-03-09T14:41:25.911+0000 7fb5c65b8700 -1 osd.7 127 *** Got signal Terminated *** 2026-03-09T14:41:26.253 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:41:25 vm11 bash[30285]: debug 2026-03-09T14:41:25.911+0000 7fb5c65b8700 -1 osd.7 127 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T14:41:26.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:26 vm07 bash[55244]: cluster 2026-03-09T14:41:25.918940+0000 mon.a (mon.0) 494 : cluster [INF] osd.7 marked itself down and dead 2026-03-09T14:41:26.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:26 vm07 bash[55244]: cluster 2026-03-09T14:41:25.918940+0000 mon.a (mon.0) 494 : cluster [INF] osd.7 marked itself down and dead 2026-03-09T14:41:26.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:26 vm07 bash[56315]: cluster 2026-03-09T14:41:25.918940+0000 mon.a (mon.0) 494 : cluster [INF] osd.7 marked itself down and dead 2026-03-09T14:41:26.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:26 vm07 bash[56315]: cluster 2026-03-09T14:41:25.918940+0000 mon.a (mon.0) 494 : cluster [INF] osd.7 marked itself down and dead 2026-03-09T14:41:26.944 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:41:26 vm11 bash[51294]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-osd-7 2026-03-09T14:41:26.944 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:26 vm11 bash[43577]: cluster 2026-03-09T14:41:25.918940+0000 mon.a (mon.0) 494 : cluster [INF] osd.7 marked itself down and dead 2026-03-09T14:41:26.944 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:26 vm11 bash[43577]: cluster 2026-03-09T14:41:25.918940+0000 mon.a (mon.0) 494 : cluster [INF] osd.7 marked itself down and dead 2026-03-09T14:41:27.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:27 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:27.253 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:41:26 vm11 bash[41290]: ts=2026-03-09T14:41:26.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:41:27.253 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:41:27 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:27.254 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:41:27 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:27.254 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:41:27 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:27.254 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:41:27 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:27.254 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:41:27 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:27.254 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:41:27 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:27.254 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:41:27 vm11 systemd[1]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@osd.7.service: Deactivated successfully. 2026-03-09T14:41:27.254 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:41:27 vm11 systemd[1]: Stopped Ceph osd.7 for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:41:27.254 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:41:27 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:27.254 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:41:27 vm11 systemd[1]: Started Ceph osd.7 for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:41:27.254 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:41:27 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:41:27.644 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:41:27 vm11 bash[51506]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T14:41:27.644 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:41:27 vm11 bash[51506]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T14:41:27.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:27 vm07 bash[56315]: cluster 2026-03-09T14:41:26.556621+0000 mgr.y (mgr.44103) 197 : cluster [DBG] pgmap v115: 161 pgs: 161 active+clean; 457 KiB data, 244 MiB used, 160 GiB / 160 GiB avail; 516 B/s rd, 0 op/s 2026-03-09T14:41:27.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:27 vm07 bash[56315]: cluster 2026-03-09T14:41:26.556621+0000 mgr.y (mgr.44103) 197 : cluster [DBG] pgmap v115: 161 pgs: 161 active+clean; 457 KiB data, 244 MiB used, 160 GiB / 160 GiB avail; 516 B/s rd, 0 op/s 2026-03-09T14:41:27.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:27 vm07 bash[56315]: cluster 2026-03-09T14:41:26.649578+0000 mon.a (mon.0) 495 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:41:27.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:27 vm07 bash[56315]: cluster 2026-03-09T14:41:26.649578+0000 mon.a (mon.0) 495 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:41:27.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:27 vm07 bash[56315]: cluster 2026-03-09T14:41:26.649601+0000 mon.a (mon.0) 496 : cluster [WRN] Health check failed: all OSDs are running squid or later but require_osd_release < squid (OSD_UPGRADE_FINISHED) 2026-03-09T14:41:27.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:27 vm07 bash[56315]: cluster 2026-03-09T14:41:26.649601+0000 mon.a (mon.0) 496 : cluster [WRN] Health check failed: all OSDs are running squid or later but require_osd_release < squid (OSD_UPGRADE_FINISHED) 2026-03-09T14:41:27.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:27 vm07 bash[56315]: cluster 2026-03-09T14:41:26.672170+0000 mon.a (mon.0) 497 : cluster [DBG] osdmap e128: 8 total, 7 up, 8 in 2026-03-09T14:41:27.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:27 vm07 bash[56315]: cluster 2026-03-09T14:41:26.672170+0000 mon.a (mon.0) 497 : cluster [DBG] osdmap e128: 8 total, 7 up, 8 in 2026-03-09T14:41:27.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:27 vm07 bash[56315]: audit 2026-03-09T14:41:27.291022+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:27.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:27 vm07 bash[56315]: audit 2026-03-09T14:41:27.291022+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:27.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:27 vm07 bash[56315]: audit 2026-03-09T14:41:27.296570+0000 mon.a (mon.0) 499 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:27.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:27 vm07 bash[56315]: audit 2026-03-09T14:41:27.296570+0000 mon.a (mon.0) 499 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:27.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:27 vm07 bash[56315]: audit 2026-03-09T14:41:27.641937+0000 mon.a (mon.0) 500 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:27.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:27 vm07 bash[56315]: audit 2026-03-09T14:41:27.641937+0000 
mon.a (mon.0) 500 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:27.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:27 vm07 bash[55244]: cluster 2026-03-09T14:41:26.556621+0000 mgr.y (mgr.44103) 197 : cluster [DBG] pgmap v115: 161 pgs: 161 active+clean; 457 KiB data, 244 MiB used, 160 GiB / 160 GiB avail; 516 B/s rd, 0 op/s 2026-03-09T14:41:27.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:27 vm07 bash[55244]: cluster 2026-03-09T14:41:26.556621+0000 mgr.y (mgr.44103) 197 : cluster [DBG] pgmap v115: 161 pgs: 161 active+clean; 457 KiB data, 244 MiB used, 160 GiB / 160 GiB avail; 516 B/s rd, 0 op/s 2026-03-09T14:41:27.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:27 vm07 bash[55244]: cluster 2026-03-09T14:41:26.649578+0000 mon.a (mon.0) 495 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:41:27.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:27 vm07 bash[55244]: cluster 2026-03-09T14:41:26.649578+0000 mon.a (mon.0) 495 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:41:27.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:27 vm07 bash[55244]: cluster 2026-03-09T14:41:26.649601+0000 mon.a (mon.0) 496 : cluster [WRN] Health check failed: all OSDs are running squid or later but require_osd_release < squid (OSD_UPGRADE_FINISHED) 2026-03-09T14:41:27.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:27 vm07 bash[55244]: cluster 2026-03-09T14:41:26.649601+0000 mon.a (mon.0) 496 : cluster [WRN] Health check failed: all OSDs are running squid or later but require_osd_release < squid (OSD_UPGRADE_FINISHED) 2026-03-09T14:41:27.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:27 vm07 bash[55244]: cluster 2026-03-09T14:41:26.672170+0000 mon.a (mon.0) 497 : cluster [DBG] osdmap e128: 8 total, 7 up, 8 in 2026-03-09T14:41:27.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:27 vm07 bash[55244]: cluster 2026-03-09T14:41:26.672170+0000 mon.a (mon.0) 497 : cluster [DBG] osdmap e128: 8 total, 7 up, 8 in 2026-03-09T14:41:27.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:27 vm07 bash[55244]: audit 2026-03-09T14:41:27.291022+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:27.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:27 vm07 bash[55244]: audit 2026-03-09T14:41:27.291022+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:27.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:27 vm07 bash[55244]: audit 2026-03-09T14:41:27.296570+0000 mon.a (mon.0) 499 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:27.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:27 vm07 bash[55244]: audit 2026-03-09T14:41:27.296570+0000 mon.a (mon.0) 499 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:27.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:27 vm07 bash[55244]: audit 2026-03-09T14:41:27.641937+0000 mon.a (mon.0) 500 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:27.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:27 vm07 bash[55244]: audit 2026-03-09T14:41:27.641937+0000 mon.a (mon.0) 500 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:28.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:27 vm11 bash[43577]: cluster 
2026-03-09T14:41:26.556621+0000 mgr.y (mgr.44103) 197 : cluster [DBG] pgmap v115: 161 pgs: 161 active+clean; 457 KiB data, 244 MiB used, 160 GiB / 160 GiB avail; 516 B/s rd, 0 op/s 2026-03-09T14:41:28.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:27 vm11 bash[43577]: cluster 2026-03-09T14:41:26.556621+0000 mgr.y (mgr.44103) 197 : cluster [DBG] pgmap v115: 161 pgs: 161 active+clean; 457 KiB data, 244 MiB used, 160 GiB / 160 GiB avail; 516 B/s rd, 0 op/s 2026-03-09T14:41:28.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:27 vm11 bash[43577]: cluster 2026-03-09T14:41:26.649578+0000 mon.a (mon.0) 495 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:41:28.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:27 vm11 bash[43577]: cluster 2026-03-09T14:41:26.649578+0000 mon.a (mon.0) 495 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-09T14:41:28.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:27 vm11 bash[43577]: cluster 2026-03-09T14:41:26.649601+0000 mon.a (mon.0) 496 : cluster [WRN] Health check failed: all OSDs are running squid or later but require_osd_release < squid (OSD_UPGRADE_FINISHED) 2026-03-09T14:41:28.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:27 vm11 bash[43577]: cluster 2026-03-09T14:41:26.649601+0000 mon.a (mon.0) 496 : cluster [WRN] Health check failed: all OSDs are running squid or later but require_osd_release < squid (OSD_UPGRADE_FINISHED) 2026-03-09T14:41:28.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:27 vm11 bash[43577]: cluster 2026-03-09T14:41:26.672170+0000 mon.a (mon.0) 497 : cluster [DBG] osdmap e128: 8 total, 7 up, 8 in 2026-03-09T14:41:28.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:27 vm11 bash[43577]: cluster 2026-03-09T14:41:26.672170+0000 mon.a (mon.0) 497 : cluster [DBG] osdmap e128: 8 total, 7 up, 8 in 2026-03-09T14:41:28.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:27 vm11 bash[43577]: audit 2026-03-09T14:41:27.291022+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:28.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:27 vm11 bash[43577]: audit 2026-03-09T14:41:27.291022+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:28.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:27 vm11 bash[43577]: audit 2026-03-09T14:41:27.296570+0000 mon.a (mon.0) 499 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:28.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:27 vm11 bash[43577]: audit 2026-03-09T14:41:27.296570+0000 mon.a (mon.0) 499 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:28.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:27 vm11 bash[43577]: audit 2026-03-09T14:41:27.641937+0000 mon.a (mon.0) 500 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:28.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:27 vm11 bash[43577]: audit 2026-03-09T14:41:27.641937+0000 mon.a (mon.0) 500 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:28.661 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:41:28 vm11 bash[51506]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-09T14:41:28.661 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:41:28 vm11 bash[51506]: Running command: /usr/bin/ceph-authtool 
--gen-print-key 2026-03-09T14:41:28.661 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:41:28 vm11 bash[51506]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-09T14:41:28.661 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:41:28 vm11 bash[51506]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-7 2026-03-09T14:41:28.661 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:41:28 vm11 bash[51506]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-7bc30442-d00e-4d3d-a6bb-e0b7c1e91e44/osd-block-abdf6bc5-5826-4388-bb2b-2d627c14c61b --path /var/lib/ceph/osd/ceph-7 --no-mon-config 2026-03-09T14:41:29.002 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:41:28 vm11 bash[51506]: Running command: /usr/bin/ln -snf /dev/ceph-7bc30442-d00e-4d3d-a6bb-e0b7c1e91e44/osd-block-abdf6bc5-5826-4388-bb2b-2d627c14c61b /var/lib/ceph/osd/ceph-7/block 2026-03-09T14:41:29.002 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:41:28 vm11 bash[51506]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-7/block 2026-03-09T14:41:29.002 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:41:28 vm11 bash[51506]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-3 2026-03-09T14:41:29.002 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:41:28 vm11 bash[51506]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-7 2026-03-09T14:41:29.002 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:41:28 vm11 bash[51506]: --> ceph-volume lvm activate successful for osd ID: 7 2026-03-09T14:41:29.002 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:41:28 vm11 bash[51863]: debug 2026-03-09T14:41:28.907+0000 7f9c1fc75640 1 -- 192.168.123.111:0/666426029 <== mon.2 v2:192.168.123.111:3300/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 194+0+0 (secure 0 0 0) 0x5562dc545680 con 0x5562db753c00 2026-03-09T14:41:29.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:28 vm11 bash[43577]: audit 2026-03-09T14:41:27.558146+0000 mgr.y (mgr.44103) 198 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:41:29.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:28 vm11 bash[43577]: audit 2026-03-09T14:41:27.558146+0000 mgr.y (mgr.44103) 198 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:41:29.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:28 vm11 bash[43577]: cluster 2026-03-09T14:41:27.686540+0000 mon.a (mon.0) 501 : cluster [DBG] osdmap e129: 8 total, 7 up, 8 in 2026-03-09T14:41:29.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:28 vm11 bash[43577]: cluster 2026-03-09T14:41:27.686540+0000 mon.a (mon.0) 501 : cluster [DBG] osdmap e129: 8 total, 7 up, 8 in 2026-03-09T14:41:29.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:28 vm07 bash[56315]: audit 2026-03-09T14:41:27.558146+0000 mgr.y (mgr.44103) 198 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:41:29.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:28 vm07 bash[56315]: audit 2026-03-09T14:41:27.558146+0000 mgr.y (mgr.44103) 198 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:41:29.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:28 vm07 bash[56315]: 
cluster 2026-03-09T14:41:27.686540+0000 mon.a (mon.0) 501 : cluster [DBG] osdmap e129: 8 total, 7 up, 8 in 2026-03-09T14:41:29.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:28 vm07 bash[56315]: cluster 2026-03-09T14:41:27.686540+0000 mon.a (mon.0) 501 : cluster [DBG] osdmap e129: 8 total, 7 up, 8 in 2026-03-09T14:41:29.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:28 vm07 bash[55244]: audit 2026-03-09T14:41:27.558146+0000 mgr.y (mgr.44103) 198 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:41:29.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:28 vm07 bash[55244]: audit 2026-03-09T14:41:27.558146+0000 mgr.y (mgr.44103) 198 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:41:29.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:28 vm07 bash[55244]: cluster 2026-03-09T14:41:27.686540+0000 mon.a (mon.0) 501 : cluster [DBG] osdmap e129: 8 total, 7 up, 8 in 2026-03-09T14:41:29.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:28 vm07 bash[55244]: cluster 2026-03-09T14:41:27.686540+0000 mon.a (mon.0) 501 : cluster [DBG] osdmap e129: 8 total, 7 up, 8 in 2026-03-09T14:41:30.002 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:41:29 vm11 bash[51863]: debug 2026-03-09T14:41:29.583+0000 7f9c224df740 -1 Falling back to public interface 2026-03-09T14:41:30.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:29 vm11 bash[43577]: cluster 2026-03-09T14:41:28.557042+0000 mgr.y (mgr.44103) 199 : cluster [DBG] pgmap v118: 161 pgs: 8 active+undersized, 21 stale+active+clean, 5 active+undersized+degraded, 127 active+clean; 457 KiB data, 244 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s; 18/627 objects degraded (2.871%) 2026-03-09T14:41:30.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:29 vm11 bash[43577]: cluster 2026-03-09T14:41:28.557042+0000 mgr.y (mgr.44103) 199 : cluster [DBG] pgmap v118: 161 pgs: 8 active+undersized, 21 stale+active+clean, 5 active+undersized+degraded, 127 active+clean; 457 KiB data, 244 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s; 18/627 objects degraded (2.871%) 2026-03-09T14:41:30.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:29 vm11 bash[43577]: cluster 2026-03-09T14:41:28.661916+0000 mon.a (mon.0) 502 : cluster [WRN] Health check failed: Degraded data redundancy: 18/627 objects degraded (2.871%), 5 pgs degraded (PG_DEGRADED) 2026-03-09T14:41:30.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:29 vm11 bash[43577]: cluster 2026-03-09T14:41:28.661916+0000 mon.a (mon.0) 502 : cluster [WRN] Health check failed: Degraded data redundancy: 18/627 objects degraded (2.871%), 5 pgs degraded (PG_DEGRADED) 2026-03-09T14:41:30.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:29 vm07 bash[55244]: cluster 2026-03-09T14:41:28.557042+0000 mgr.y (mgr.44103) 199 : cluster [DBG] pgmap v118: 161 pgs: 8 active+undersized, 21 stale+active+clean, 5 active+undersized+degraded, 127 active+clean; 457 KiB data, 244 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s; 18/627 objects degraded (2.871%) 2026-03-09T14:41:30.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:29 vm07 bash[55244]: cluster 2026-03-09T14:41:28.557042+0000 mgr.y (mgr.44103) 199 : cluster [DBG] pgmap v118: 161 pgs: 8 active+undersized, 21 stale+active+clean, 5 active+undersized+degraded, 127 active+clean; 457 KiB data, 244 MiB used, 
160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s; 18/627 objects degraded (2.871%) 2026-03-09T14:41:30.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:29 vm07 bash[55244]: cluster 2026-03-09T14:41:28.661916+0000 mon.a (mon.0) 502 : cluster [WRN] Health check failed: Degraded data redundancy: 18/627 objects degraded (2.871%), 5 pgs degraded (PG_DEGRADED) 2026-03-09T14:41:30.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:29 vm07 bash[55244]: cluster 2026-03-09T14:41:28.661916+0000 mon.a (mon.0) 502 : cluster [WRN] Health check failed: Degraded data redundancy: 18/627 objects degraded (2.871%), 5 pgs degraded (PG_DEGRADED) 2026-03-09T14:41:30.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:29 vm07 bash[56315]: cluster 2026-03-09T14:41:28.557042+0000 mgr.y (mgr.44103) 199 : cluster [DBG] pgmap v118: 161 pgs: 8 active+undersized, 21 stale+active+clean, 5 active+undersized+degraded, 127 active+clean; 457 KiB data, 244 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s; 18/627 objects degraded (2.871%) 2026-03-09T14:41:30.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:29 vm07 bash[56315]: cluster 2026-03-09T14:41:28.557042+0000 mgr.y (mgr.44103) 199 : cluster [DBG] pgmap v118: 161 pgs: 8 active+undersized, 21 stale+active+clean, 5 active+undersized+degraded, 127 active+clean; 457 KiB data, 244 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s; 18/627 objects degraded (2.871%) 2026-03-09T14:41:30.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:29 vm07 bash[56315]: cluster 2026-03-09T14:41:28.661916+0000 mon.a (mon.0) 502 : cluster [WRN] Health check failed: Degraded data redundancy: 18/627 objects degraded (2.871%), 5 pgs degraded (PG_DEGRADED) 2026-03-09T14:41:30.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:29 vm07 bash[56315]: cluster 2026-03-09T14:41:28.661916+0000 mon.a (mon.0) 502 : cluster [WRN] Health check failed: Degraded data redundancy: 18/627 objects degraded (2.871%), 5 pgs degraded (PG_DEGRADED) 2026-03-09T14:41:31.002 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:41:30 vm11 bash[51863]: debug 2026-03-09T14:41:30.552+0000 7f9c224df740 -1 osd.7 0 read_superblock omap replica is missing. 
2026-03-09T14:41:31.002 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:41:30 vm11 bash[51863]: debug 2026-03-09T14:41:30.564+0000 7f9c224df740 -1 osd.7 127 log_to_monitors true 2026-03-09T14:41:31.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:30 vm11 bash[43577]: audit 2026-03-09T14:41:30.569851+0000 mon.b (mon.2) 9 : audit [INF] from='osd.7 [v2:192.168.123.111:6824/3925749635,v1:192.168.123.111:6825/3925749635]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T14:41:31.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:30 vm11 bash[43577]: audit 2026-03-09T14:41:30.569851+0000 mon.b (mon.2) 9 : audit [INF] from='osd.7 [v2:192.168.123.111:6824/3925749635,v1:192.168.123.111:6825/3925749635]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T14:41:31.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:30 vm11 bash[43577]: audit 2026-03-09T14:41:30.575041+0000 mon.a (mon.0) 503 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T14:41:31.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:30 vm11 bash[43577]: audit 2026-03-09T14:41:30.575041+0000 mon.a (mon.0) 503 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T14:41:31.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:30 vm07 bash[55244]: audit 2026-03-09T14:41:30.569851+0000 mon.b (mon.2) 9 : audit [INF] from='osd.7 [v2:192.168.123.111:6824/3925749635,v1:192.168.123.111:6825/3925749635]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T14:41:31.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:30 vm07 bash[55244]: audit 2026-03-09T14:41:30.569851+0000 mon.b (mon.2) 9 : audit [INF] from='osd.7 [v2:192.168.123.111:6824/3925749635,v1:192.168.123.111:6825/3925749635]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T14:41:31.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:30 vm07 bash[55244]: audit 2026-03-09T14:41:30.575041+0000 mon.a (mon.0) 503 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T14:41:31.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:30 vm07 bash[55244]: audit 2026-03-09T14:41:30.575041+0000 mon.a (mon.0) 503 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T14:41:31.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:30 vm07 bash[56315]: audit 2026-03-09T14:41:30.569851+0000 mon.b (mon.2) 9 : audit [INF] from='osd.7 [v2:192.168.123.111:6824/3925749635,v1:192.168.123.111:6825/3925749635]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T14:41:31.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:30 vm07 bash[56315]: audit 2026-03-09T14:41:30.569851+0000 mon.b (mon.2) 9 : audit [INF] from='osd.7 [v2:192.168.123.111:6824/3925749635,v1:192.168.123.111:6825/3925749635]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T14:41:31.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:30 vm07 bash[56315]: audit 
2026-03-09T14:41:30.575041+0000 mon.a (mon.0) 503 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T14:41:31.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:30 vm07 bash[56315]: audit 2026-03-09T14:41:30.575041+0000 mon.a (mon.0) 503 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T14:41:31.195 INFO:teuthology.orchestra.run.vm07.stdout:true 2026-03-09T14:41:31.608 INFO:teuthology.orchestra.run.vm07.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T14:41:31.608 INFO:teuthology.orchestra.run.vm07.stdout:alertmanager.a vm07 *:9093,9094 running (3m) 65s ago 8m 13.6M - 0.25.0 c8568f914cd2 7b5214f8e385 2026-03-09T14:41:31.608 INFO:teuthology.orchestra.run.vm07.stdout:grafana.a vm11 *:3000 running (3m) 14s ago 8m 38.8M - dad864ee21e9 614f6a00be7a 2026-03-09T14:41:31.608 INFO:teuthology.orchestra.run.vm07.stdout:iscsi.foo.vm07.ohlmos vm07 running (3m) 65s ago 8m 43.0M - 3.5 e1d6a67b021e e3b30dab288c 2026-03-09T14:41:31.608 INFO:teuthology.orchestra.run.vm07.stdout:mgr.x vm11 *:8443,9283,8765 running (3m) 14s ago 11m 465M - 19.2.3-678-ge911bdeb 654f31e6858e d35dddd392d1 2026-03-09T14:41:31.608 INFO:teuthology.orchestra.run.vm07.stdout:mgr.y vm07 *:8443,9283,8765 running (3m) 65s ago 12m 528M - 19.2.3-678-ge911bdeb 654f31e6858e bdbac6dff330 2026-03-09T14:41:31.608 INFO:teuthology.orchestra.run.vm07.stdout:mon.a vm07 running (2m) 65s ago 12m 44.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e bcdaa5dfc948 2026-03-09T14:41:31.608 INFO:teuthology.orchestra.run.vm07.stdout:mon.b vm11 running (2m) 14s ago 11m 39.7M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1caba9bf8a13 2026-03-09T14:41:31.608 INFO:teuthology.orchestra.run.vm07.stdout:mon.c vm07 running (2m) 65s ago 11m 42.8M 2048M 19.2.3-678-ge911bdeb 654f31e6858e ff7dfe3a6c7c 2026-03-09T14:41:31.608 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.a vm07 *:9100 running (3m) 65s ago 9m 7591k - 1.7.0 72c9c2088986 16d64a9c3aa7 2026-03-09T14:41:31.608 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.b vm11 *:9100 running (3m) 14s ago 9m 7671k - 1.7.0 72c9c2088986 8e368c535897 2026-03-09T14:41:31.608 INFO:teuthology.orchestra.run.vm07.stdout:osd.0 vm07 running (86s) 65s ago 11m 45.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 24632814894d 2026-03-09T14:41:31.608 INFO:teuthology.orchestra.run.vm07.stdout:osd.1 vm07 running (70s) 65s ago 10m 31.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 1f773b5d0f68 2026-03-09T14:41:31.608 INFO:teuthology.orchestra.run.vm07.stdout:osd.2 vm07 running (103s) 65s ago 10m 65.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 7d943c2f091c 2026-03-09T14:41:31.608 INFO:teuthology.orchestra.run.vm07.stdout:osd.3 vm07 running (2m) 65s ago 10m 48.1M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 7c234b83449a 2026-03-09T14:41:31.608 INFO:teuthology.orchestra.run.vm07.stdout:osd.4 vm11 running (53s) 14s ago 10m 47.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 811379ab4ba5 2026-03-09T14:41:31.608 INFO:teuthology.orchestra.run.vm07.stdout:osd.5 vm11 running (35s) 14s ago 9m 66.0M 4096M 19.2.3-678-ge911bdeb 654f31e6858e bc7e71aa5718 2026-03-09T14:41:31.608 INFO:teuthology.orchestra.run.vm07.stdout:osd.6 vm11 running (19s) 14s ago 9m 12.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 20bc2716b966 2026-03-09T14:41:31.608 INFO:teuthology.orchestra.run.vm07.stdout:osd.7 vm11 starting - - - 4096M 2026-03-09T14:41:31.608 
INFO:teuthology.orchestra.run.vm07.stdout:prometheus.a vm11 *:9095 running (3m) 14s ago 8m 40.5M - 2.51.0 1d3b7f56885b e88f0339687c 2026-03-09T14:41:31.608 INFO:teuthology.orchestra.run.vm07.stdout:rgw.foo.vm07.urmgxb vm07 *:8000 running (8m) 65s ago 8m 85.8M - 17.2.0 e1d6a67b021e 765128ae03a3 2026-03-09T14:41:31.608 INFO:teuthology.orchestra.run.vm07.stdout:rgw.foo.vm11.ncyump vm11 *:8000 running (8m) 14s ago 8m 85.2M - 17.2.0 e1d6a67b021e 33917711cfd6 2026-03-09T14:41:31.608 INFO:teuthology.orchestra.run.vm07.stdout:rgw.smpl.vm07.tkkeli vm07 *:80 running (8m) 65s ago 8m 85.3M - 17.2.0 e1d6a67b021e 377fed84fff0 2026-03-09T14:41:31.608 INFO:teuthology.orchestra.run.vm07.stdout:rgw.smpl.vm11.ocxkef vm11 *:80 running (8m) 14s ago 8m 85.3M - 17.2.0 e1d6a67b021e 90ec06d07cd4 2026-03-09T14:41:31.893 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T14:41:31.893 INFO:teuthology.orchestra.run.vm07.stdout: "mon": { 2026-03-09T14:41:31.893 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3 2026-03-09T14:41:31.893 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:41:31.893 INFO:teuthology.orchestra.run.vm07.stdout: "mgr": { 2026-03-09T14:41:31.893 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T14:41:31.893 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:41:31.893 INFO:teuthology.orchestra.run.vm07.stdout: "osd": { 2026-03-09T14:41:31.893 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 8 2026-03-09T14:41:31.893 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:41:31.893 INFO:teuthology.orchestra.run.vm07.stdout: "rgw": { 2026-03-09T14:41:31.893 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 4 2026-03-09T14:41:31.893 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:41:31.893 INFO:teuthology.orchestra.run.vm07.stdout: "overall": { 2026-03-09T14:41:31.893 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 4, 2026-03-09T14:41:31.893 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 13 2026-03-09T14:41:31.893 INFO:teuthology.orchestra.run.vm07.stdout: } 2026-03-09T14:41:31.893 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T14:41:32.002 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:41:31 vm11 bash[51863]: debug 2026-03-09T14:41:31.520+0000 7f9c19a89640 -1 osd.7 127 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T14:41:32.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:31 vm11 bash[43577]: cluster 2026-03-09T14:41:30.557413+0000 mgr.y (mgr.44103) 200 : cluster [DBG] pgmap v119: 161 pgs: 16 active+undersized, 17 stale+active+clean, 12 active+undersized+degraded, 116 active+clean; 457 KiB data, 244 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s; 36/627 objects degraded (5.742%) 2026-03-09T14:41:32.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:31 vm11 bash[43577]: cluster 2026-03-09T14:41:30.557413+0000 mgr.y (mgr.44103) 200 : cluster [DBG] pgmap v119: 161 pgs: 16 active+undersized, 17 stale+active+clean, 12 active+undersized+degraded, 116 active+clean; 
457 KiB data, 244 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s; 36/627 objects degraded (5.742%) 2026-03-09T14:41:32.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:31 vm11 bash[43577]: audit 2026-03-09T14:41:30.682072+0000 mon.a (mon.0) 504 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T14:41:32.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:31 vm11 bash[43577]: audit 2026-03-09T14:41:30.682072+0000 mon.a (mon.0) 504 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T14:41:32.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:31 vm11 bash[43577]: cluster 2026-03-09T14:41:30.683915+0000 mon.a (mon.0) 505 : cluster [DBG] osdmap e130: 8 total, 7 up, 8 in 2026-03-09T14:41:32.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:31 vm11 bash[43577]: cluster 2026-03-09T14:41:30.683915+0000 mon.a (mon.0) 505 : cluster [DBG] osdmap e130: 8 total, 7 up, 8 in 2026-03-09T14:41:32.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:31 vm11 bash[43577]: audit 2026-03-09T14:41:30.684033+0000 mon.b (mon.2) 10 : audit [INF] from='osd.7 [v2:192.168.123.111:6824/3925749635,v1:192.168.123.111:6825/3925749635]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:41:32.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:31 vm11 bash[43577]: audit 2026-03-09T14:41:30.684033+0000 mon.b (mon.2) 10 : audit [INF] from='osd.7 [v2:192.168.123.111:6824/3925749635,v1:192.168.123.111:6825/3925749635]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:41:32.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:31 vm11 bash[43577]: audit 2026-03-09T14:41:30.689226+0000 mon.a (mon.0) 506 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:41:32.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:31 vm11 bash[43577]: audit 2026-03-09T14:41:30.689226+0000 mon.a (mon.0) 506 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:41:32.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:31 vm11 bash[43577]: audit 2026-03-09T14:41:31.192922+0000 mgr.y (mgr.44103) 201 : audit [DBG] from='client.34297 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:32.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:31 vm11 bash[43577]: audit 2026-03-09T14:41:31.192922+0000 mgr.y (mgr.44103) 201 : audit [DBG] from='client.34297 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:32.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:31 vm11 bash[43577]: audit 2026-03-09T14:41:31.409449+0000 mgr.y (mgr.44103) 202 : audit [DBG] from='client.44313 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:32.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:31 vm11 bash[43577]: audit 2026-03-09T14:41:31.409449+0000 mgr.y (mgr.44103) 202 : audit [DBG] from='client.44313 
-' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:32.090 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T14:41:32.090 INFO:teuthology.orchestra.run.vm07.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", 2026-03-09T14:41:32.090 INFO:teuthology.orchestra.run.vm07.stdout: "in_progress": true, 2026-03-09T14:41:32.090 INFO:teuthology.orchestra.run.vm07.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-09T14:41:32.090 INFO:teuthology.orchestra.run.vm07.stdout: "services_complete": [ 2026-03-09T14:41:32.090 INFO:teuthology.orchestra.run.vm07.stdout: "mgr", 2026-03-09T14:41:32.091 INFO:teuthology.orchestra.run.vm07.stdout: "mon" 2026-03-09T14:41:32.091 INFO:teuthology.orchestra.run.vm07.stdout: ], 2026-03-09T14:41:32.091 INFO:teuthology.orchestra.run.vm07.stdout: "progress": "12/23 daemons upgraded", 2026-03-09T14:41:32.091 INFO:teuthology.orchestra.run.vm07.stdout: "message": "Currently upgrading osd daemons", 2026-03-09T14:41:32.091 INFO:teuthology.orchestra.run.vm07.stdout: "is_paused": false 2026-03-09T14:41:32.091 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T14:41:32.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:31 vm07 bash[56315]: cluster 2026-03-09T14:41:30.557413+0000 mgr.y (mgr.44103) 200 : cluster [DBG] pgmap v119: 161 pgs: 16 active+undersized, 17 stale+active+clean, 12 active+undersized+degraded, 116 active+clean; 457 KiB data, 244 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s; 36/627 objects degraded (5.742%) 2026-03-09T14:41:32.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:31 vm07 bash[56315]: cluster 2026-03-09T14:41:30.557413+0000 mgr.y (mgr.44103) 200 : cluster [DBG] pgmap v119: 161 pgs: 16 active+undersized, 17 stale+active+clean, 12 active+undersized+degraded, 116 active+clean; 457 KiB data, 244 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s; 36/627 objects degraded (5.742%) 2026-03-09T14:41:32.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:31 vm07 bash[56315]: audit 2026-03-09T14:41:30.682072+0000 mon.a (mon.0) 504 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T14:41:32.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:31 vm07 bash[56315]: audit 2026-03-09T14:41:30.682072+0000 mon.a (mon.0) 504 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T14:41:32.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:31 vm07 bash[56315]: cluster 2026-03-09T14:41:30.683915+0000 mon.a (mon.0) 505 : cluster [DBG] osdmap e130: 8 total, 7 up, 8 in 2026-03-09T14:41:32.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:31 vm07 bash[56315]: cluster 2026-03-09T14:41:30.683915+0000 mon.a (mon.0) 505 : cluster [DBG] osdmap e130: 8 total, 7 up, 8 in 2026-03-09T14:41:32.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:31 vm07 bash[56315]: audit 2026-03-09T14:41:30.684033+0000 mon.b (mon.2) 10 : audit [INF] from='osd.7 [v2:192.168.123.111:6824/3925749635,v1:192.168.123.111:6825/3925749635]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:41:32.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:31 vm07 bash[56315]: audit 2026-03-09T14:41:30.684033+0000 mon.b (mon.2) 10 : audit [INF] from='osd.7 
[v2:192.168.123.111:6824/3925749635,v1:192.168.123.111:6825/3925749635]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:41:32.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:31 vm07 bash[56315]: audit 2026-03-09T14:41:30.689226+0000 mon.a (mon.0) 506 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:41:32.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:31 vm07 bash[56315]: audit 2026-03-09T14:41:30.689226+0000 mon.a (mon.0) 506 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:41:32.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:31 vm07 bash[56315]: audit 2026-03-09T14:41:31.192922+0000 mgr.y (mgr.44103) 201 : audit [DBG] from='client.34297 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:32.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:31 vm07 bash[56315]: audit 2026-03-09T14:41:31.192922+0000 mgr.y (mgr.44103) 201 : audit [DBG] from='client.34297 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:32.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:31 vm07 bash[56315]: audit 2026-03-09T14:41:31.409449+0000 mgr.y (mgr.44103) 202 : audit [DBG] from='client.44313 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:32.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:31 vm07 bash[56315]: audit 2026-03-09T14:41:31.409449+0000 mgr.y (mgr.44103) 202 : audit [DBG] from='client.44313 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:32.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:31 vm07 bash[55244]: cluster 2026-03-09T14:41:30.557413+0000 mgr.y (mgr.44103) 200 : cluster [DBG] pgmap v119: 161 pgs: 16 active+undersized, 17 stale+active+clean, 12 active+undersized+degraded, 116 active+clean; 457 KiB data, 244 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s; 36/627 objects degraded (5.742%) 2026-03-09T14:41:32.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:31 vm07 bash[55244]: cluster 2026-03-09T14:41:30.557413+0000 mgr.y (mgr.44103) 200 : cluster [DBG] pgmap v119: 161 pgs: 16 active+undersized, 17 stale+active+clean, 12 active+undersized+degraded, 116 active+clean; 457 KiB data, 244 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s; 36/627 objects degraded (5.742%) 2026-03-09T14:41:32.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:31 vm07 bash[55244]: audit 2026-03-09T14:41:30.682072+0000 mon.a (mon.0) 504 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T14:41:32.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:31 vm07 bash[55244]: audit 2026-03-09T14:41:30.682072+0000 mon.a (mon.0) 504 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T14:41:32.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:31 vm07 bash[55244]: cluster 2026-03-09T14:41:30.683915+0000 mon.a (mon.0) 505 : cluster [DBG] osdmap 
e130: 8 total, 7 up, 8 in 2026-03-09T14:41:32.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:31 vm07 bash[55244]: cluster 2026-03-09T14:41:30.683915+0000 mon.a (mon.0) 505 : cluster [DBG] osdmap e130: 8 total, 7 up, 8 in 2026-03-09T14:41:32.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:31 vm07 bash[55244]: audit 2026-03-09T14:41:30.684033+0000 mon.b (mon.2) 10 : audit [INF] from='osd.7 [v2:192.168.123.111:6824/3925749635,v1:192.168.123.111:6825/3925749635]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:41:32.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:31 vm07 bash[55244]: audit 2026-03-09T14:41:30.684033+0000 mon.b (mon.2) 10 : audit [INF] from='osd.7 [v2:192.168.123.111:6824/3925749635,v1:192.168.123.111:6825/3925749635]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:41:32.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:31 vm07 bash[55244]: audit 2026-03-09T14:41:30.689226+0000 mon.a (mon.0) 506 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:41:32.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:31 vm07 bash[55244]: audit 2026-03-09T14:41:30.689226+0000 mon.a (mon.0) 506 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-09T14:41:32.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:31 vm07 bash[55244]: audit 2026-03-09T14:41:31.192922+0000 mgr.y (mgr.44103) 201 : audit [DBG] from='client.34297 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:32.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:31 vm07 bash[55244]: audit 2026-03-09T14:41:31.192922+0000 mgr.y (mgr.44103) 201 : audit [DBG] from='client.34297 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:32.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:31 vm07 bash[55244]: audit 2026-03-09T14:41:31.409449+0000 mgr.y (mgr.44103) 202 : audit [DBG] from='client.44313 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:32.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:31 vm07 bash[55244]: audit 2026-03-09T14:41:31.409449+0000 mgr.y (mgr.44103) 202 : audit [DBG] from='client.44313 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:32.328 INFO:teuthology.orchestra.run.vm07.stdout:HEALTH_WARN all OSDs are running squid or later but require_osd_release < squid; Degraded data redundancy: 36/627 objects degraded (5.742%), 12 pgs degraded 2026-03-09T14:41:32.328 INFO:teuthology.orchestra.run.vm07.stdout:[WRN] OSD_UPGRADE_FINISHED: all OSDs are running squid or later but require_osd_release < squid 2026-03-09T14:41:32.328 INFO:teuthology.orchestra.run.vm07.stdout: all OSDs are running squid or later but require_osd_release < squid 2026-03-09T14:41:32.328 INFO:teuthology.orchestra.run.vm07.stdout:[WRN] PG_DEGRADED: Degraded data redundancy: 36/627 objects degraded (5.742%), 12 pgs degraded 2026-03-09T14:41:32.328 INFO:teuthology.orchestra.run.vm07.stdout: pg 
2.4 is active+undersized+degraded, acting [1,0] 2026-03-09T14:41:32.328 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.9 is active+undersized+degraded, acting [1,3] 2026-03-09T14:41:32.328 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.a is active+undersized+degraded, acting [1,3] 2026-03-09T14:41:32.328 INFO:teuthology.orchestra.run.vm07.stdout: pg 2.12 is active+undersized+degraded, acting [5,3] 2026-03-09T14:41:32.328 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.7 is active+undersized+degraded, acting [3,0] 2026-03-09T14:41:32.328 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.8 is active+undersized+degraded, acting [3,1] 2026-03-09T14:41:32.328 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.d is active+undersized+degraded, acting [5,6] 2026-03-09T14:41:32.328 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.15 is active+undersized+degraded, acting [3,4] 2026-03-09T14:41:32.328 INFO:teuthology.orchestra.run.vm07.stdout: pg 3.16 is active+undersized+degraded, acting [5,1] 2026-03-09T14:41:32.328 INFO:teuthology.orchestra.run.vm07.stdout: pg 4.0 is active+undersized+degraded, acting [3,0] 2026-03-09T14:41:32.328 INFO:teuthology.orchestra.run.vm07.stdout: pg 4.14 is active+undersized+degraded, acting [3,1] 2026-03-09T14:41:32.328 INFO:teuthology.orchestra.run.vm07.stdout: pg 4.15 is active+undersized+degraded, acting [5,3] 2026-03-09T14:41:33.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:32 vm11 bash[43577]: audit 2026-03-09T14:41:31.612785+0000 mgr.y (mgr.44103) 203 : audit [DBG] from='client.34309 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:33.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:32 vm11 bash[43577]: audit 2026-03-09T14:41:31.612785+0000 mgr.y (mgr.44103) 203 : audit [DBG] from='client.34309 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:33.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:32 vm11 bash[43577]: cluster 2026-03-09T14:41:31.683422+0000 mon.a (mon.0) 507 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:41:33.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:32 vm11 bash[43577]: cluster 2026-03-09T14:41:31.683422+0000 mon.a (mon.0) 507 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:41:33.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:32 vm11 bash[43577]: cluster 2026-03-09T14:41:31.701489+0000 mon.a (mon.0) 508 : cluster [INF] osd.7 [v2:192.168.123.111:6824/3925749635,v1:192.168.123.111:6825/3925749635] boot 2026-03-09T14:41:33.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:32 vm11 bash[43577]: cluster 2026-03-09T14:41:31.701489+0000 mon.a (mon.0) 508 : cluster [INF] osd.7 [v2:192.168.123.111:6824/3925749635,v1:192.168.123.111:6825/3925749635] boot 2026-03-09T14:41:33.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:32 vm11 bash[43577]: cluster 2026-03-09T14:41:31.701513+0000 mon.a (mon.0) 509 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-09T14:41:33.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:32 vm11 bash[43577]: cluster 2026-03-09T14:41:31.701513+0000 mon.a (mon.0) 509 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-09T14:41:33.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:32 vm11 bash[43577]: audit 2026-03-09T14:41:31.701766+0000 mon.a (mon.0) 510 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 
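The HEALTH_WARN block above (OSD_UPGRADE_FINISHED) is expected at this point in the run: every OSD is already running squid binaries, but the osdmap's require_osd_release flag still names the pre-upgrade release, and the PG_DEGRADED entries reflect osd.7 having just been restarted. cephadm normally raises the release flag itself once the upgrade completes; as a minimal sketch of how an operator could verify and, if necessary, set it by hand (standard Ceph CLI commands, with the release name taken from the warning itself):

  ceph osd dump | grep require_osd_release   # show the flag currently recorded in the osdmap
  ceph osd require-osd-release squid         # raise it once all OSDs run squid or later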
2026-03-09T14:41:33.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:32 vm11 bash[43577]: audit 2026-03-09T14:41:31.701766+0000 mon.a (mon.0) 510 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:41:33.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:32 vm11 bash[43577]: audit 2026-03-09T14:41:31.901739+0000 mon.a (mon.0) 511 : audit [DBG] from='client.? 192.168.123.107:0/814042503' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:33.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:32 vm11 bash[43577]: audit 2026-03-09T14:41:31.901739+0000 mon.a (mon.0) 511 : audit [DBG] from='client.? 192.168.123.107:0/814042503' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:33.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:32 vm11 bash[43577]: audit 2026-03-09T14:41:32.099601+0000 mgr.y (mgr.44103) 204 : audit [DBG] from='client.44328 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:33.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:32 vm11 bash[43577]: audit 2026-03-09T14:41:32.099601+0000 mgr.y (mgr.44103) 204 : audit [DBG] from='client.44328 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:33.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:32 vm11 bash[43577]: audit 2026-03-09T14:41:32.337110+0000 mon.c (mon.1) 21 : audit [DBG] from='client.? 192.168.123.107:0/2333454986' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:41:33.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:32 vm11 bash[43577]: audit 2026-03-09T14:41:32.337110+0000 mon.c (mon.1) 21 : audit [DBG] from='client.? 
192.168.123.107:0/2333454986' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:41:33.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:32 vm07 bash[56315]: audit 2026-03-09T14:41:31.612785+0000 mgr.y (mgr.44103) 203 : audit [DBG] from='client.34309 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:33.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:32 vm07 bash[56315]: audit 2026-03-09T14:41:31.612785+0000 mgr.y (mgr.44103) 203 : audit [DBG] from='client.34309 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:33.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:32 vm07 bash[56315]: cluster 2026-03-09T14:41:31.683422+0000 mon.a (mon.0) 507 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:41:33.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:32 vm07 bash[56315]: cluster 2026-03-09T14:41:31.683422+0000 mon.a (mon.0) 507 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:41:33.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:32 vm07 bash[56315]: cluster 2026-03-09T14:41:31.701489+0000 mon.a (mon.0) 508 : cluster [INF] osd.7 [v2:192.168.123.111:6824/3925749635,v1:192.168.123.111:6825/3925749635] boot 2026-03-09T14:41:33.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:32 vm07 bash[56315]: cluster 2026-03-09T14:41:31.701489+0000 mon.a (mon.0) 508 : cluster [INF] osd.7 [v2:192.168.123.111:6824/3925749635,v1:192.168.123.111:6825/3925749635] boot 2026-03-09T14:41:33.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:32 vm07 bash[56315]: cluster 2026-03-09T14:41:31.701513+0000 mon.a (mon.0) 509 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-09T14:41:33.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:32 vm07 bash[56315]: cluster 2026-03-09T14:41:31.701513+0000 mon.a (mon.0) 509 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-09T14:41:33.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:32 vm07 bash[56315]: audit 2026-03-09T14:41:31.701766+0000 mon.a (mon.0) 510 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:41:33.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:32 vm07 bash[56315]: audit 2026-03-09T14:41:31.701766+0000 mon.a (mon.0) 510 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:41:33.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:32 vm07 bash[56315]: audit 2026-03-09T14:41:31.901739+0000 mon.a (mon.0) 511 : audit [DBG] from='client.? 192.168.123.107:0/814042503' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:33.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:32 vm07 bash[56315]: audit 2026-03-09T14:41:31.901739+0000 mon.a (mon.0) 511 : audit [DBG] from='client.? 
192.168.123.107:0/814042503' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:33.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:32 vm07 bash[56315]: audit 2026-03-09T14:41:32.099601+0000 mgr.y (mgr.44103) 204 : audit [DBG] from='client.44328 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:33.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:32 vm07 bash[56315]: audit 2026-03-09T14:41:32.099601+0000 mgr.y (mgr.44103) 204 : audit [DBG] from='client.44328 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:33.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:32 vm07 bash[56315]: audit 2026-03-09T14:41:32.337110+0000 mon.c (mon.1) 21 : audit [DBG] from='client.? 192.168.123.107:0/2333454986' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:41:33.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:32 vm07 bash[56315]: audit 2026-03-09T14:41:32.337110+0000 mon.c (mon.1) 21 : audit [DBG] from='client.? 192.168.123.107:0/2333454986' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:41:33.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:32 vm07 bash[55244]: audit 2026-03-09T14:41:31.612785+0000 mgr.y (mgr.44103) 203 : audit [DBG] from='client.34309 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:33.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:32 vm07 bash[55244]: audit 2026-03-09T14:41:31.612785+0000 mgr.y (mgr.44103) 203 : audit [DBG] from='client.34309 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:33.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:32 vm07 bash[55244]: cluster 2026-03-09T14:41:31.683422+0000 mon.a (mon.0) 507 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:41:33.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:32 vm07 bash[55244]: cluster 2026-03-09T14:41:31.683422+0000 mon.a (mon.0) 507 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-09T14:41:33.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:32 vm07 bash[55244]: cluster 2026-03-09T14:41:31.701489+0000 mon.a (mon.0) 508 : cluster [INF] osd.7 [v2:192.168.123.111:6824/3925749635,v1:192.168.123.111:6825/3925749635] boot 2026-03-09T14:41:33.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:32 vm07 bash[55244]: cluster 2026-03-09T14:41:31.701489+0000 mon.a (mon.0) 508 : cluster [INF] osd.7 [v2:192.168.123.111:6824/3925749635,v1:192.168.123.111:6825/3925749635] boot 2026-03-09T14:41:33.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:32 vm07 bash[55244]: cluster 2026-03-09T14:41:31.701513+0000 mon.a (mon.0) 509 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-09T14:41:33.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:32 vm07 bash[55244]: cluster 2026-03-09T14:41:31.701513+0000 mon.a (mon.0) 509 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-09T14:41:33.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:32 vm07 bash[55244]: audit 2026-03-09T14:41:31.701766+0000 mon.a (mon.0) 510 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:41:33.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:32 vm07 bash[55244]: audit 
2026-03-09T14:41:31.701766+0000 mon.a (mon.0) 510 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:41:33.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:32 vm07 bash[55244]: audit 2026-03-09T14:41:31.901739+0000 mon.a (mon.0) 511 : audit [DBG] from='client.? 192.168.123.107:0/814042503' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:33.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:32 vm07 bash[55244]: audit 2026-03-09T14:41:31.901739+0000 mon.a (mon.0) 511 : audit [DBG] from='client.? 192.168.123.107:0/814042503' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:33.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:32 vm07 bash[55244]: audit 2026-03-09T14:41:32.099601+0000 mgr.y (mgr.44103) 204 : audit [DBG] from='client.44328 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:33.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:32 vm07 bash[55244]: audit 2026-03-09T14:41:32.099601+0000 mgr.y (mgr.44103) 204 : audit [DBG] from='client.44328 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:41:33.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:32 vm07 bash[55244]: audit 2026-03-09T14:41:32.337110+0000 mon.c (mon.1) 21 : audit [DBG] from='client.? 192.168.123.107:0/2333454986' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:41:33.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:32 vm07 bash[55244]: audit 2026-03-09T14:41:32.337110+0000 mon.c (mon.1) 21 : audit [DBG] from='client.? 192.168.123.107:0/2333454986' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:41:33.705 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:41:33 vm07 bash[52213]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:41:33] "GET /metrics HTTP/1.1" 200 38111 "" "Prometheus/2.51.0" 2026-03-09T14:41:33.708 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:33 vm11 bash[43577]: cluster 2026-03-09T14:41:31.505277+0000 osd.7 (osd.7) 1 : cluster [WRN] OSD bench result of 27646.566660 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. 2026-03-09T14:41:33.708 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:33 vm11 bash[43577]: cluster 2026-03-09T14:41:31.505277+0000 osd.7 (osd.7) 1 : cluster [WRN] OSD bench result of 27646.566660 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. 
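The OSD bench warning above is informational rather than an error: the synthetic benchmark run at osd.7 startup measured roughly 27,647 IOPS, which falls outside the 50-500 IOPS plausibility window applied to hdd-class devices, so the mclock scheduler keeps the default capacity of 315 IOPS. Following the recommendation in the message itself, a hedged sketch of overriding the capacity after measuring the device with an external tool such as fio (the 27000 figure below is purely illustrative, not a measured value):

  ceph config get osd.7 osd_mclock_max_capacity_iops_hdd          # value currently in effect for osd.7
  ceph config set osd.7 osd_mclock_max_capacity_iops_hdd 27000    # apply an externally benchmarked figure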
2026-03-09T14:41:33.708 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:33 vm11 bash[43577]: cluster 2026-03-09T14:41:32.557748+0000 mgr.y (mgr.44103) 205 : cluster [DBG] pgmap v122: 161 pgs: 45 active+undersized, 24 active+undersized+degraded, 92 active+clean; 457 KiB data, 266 MiB used, 160 GiB / 160 GiB avail; 80/627 objects degraded (12.759%) 2026-03-09T14:41:33.708 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:33 vm11 bash[43577]: cluster 2026-03-09T14:41:32.557748+0000 mgr.y (mgr.44103) 205 : cluster [DBG] pgmap v122: 161 pgs: 45 active+undersized, 24 active+undersized+degraded, 92 active+clean; 457 KiB data, 266 MiB used, 160 GiB / 160 GiB avail; 80/627 objects degraded (12.759%) 2026-03-09T14:41:33.708 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:33 vm11 bash[43577]: cluster 2026-03-09T14:41:32.715210+0000 mon.a (mon.0) 512 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-09T14:41:33.708 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:33 vm11 bash[43577]: cluster 2026-03-09T14:41:32.715210+0000 mon.a (mon.0) 512 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-09T14:41:33.708 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:33 vm11 bash[43577]: audit 2026-03-09T14:41:33.624067+0000 mon.a (mon.0) 513 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:33.708 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:33 vm11 bash[43577]: audit 2026-03-09T14:41:33.624067+0000 mon.a (mon.0) 513 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:33.708 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:33 vm11 bash[43577]: audit 2026-03-09T14:41:33.628987+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:33.708 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:33 vm11 bash[43577]: audit 2026-03-09T14:41:33.628987+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:34.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:33 vm07 bash[56315]: cluster 2026-03-09T14:41:31.505277+0000 osd.7 (osd.7) 1 : cluster [WRN] OSD bench result of 27646.566660 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. 2026-03-09T14:41:34.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:33 vm07 bash[56315]: cluster 2026-03-09T14:41:31.505277+0000 osd.7 (osd.7) 1 : cluster [WRN] OSD bench result of 27646.566660 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. 
2026-03-09T14:41:34.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:33 vm07 bash[56315]: cluster 2026-03-09T14:41:32.557748+0000 mgr.y (mgr.44103) 205 : cluster [DBG] pgmap v122: 161 pgs: 45 active+undersized, 24 active+undersized+degraded, 92 active+clean; 457 KiB data, 266 MiB used, 160 GiB / 160 GiB avail; 80/627 objects degraded (12.759%) 2026-03-09T14:41:34.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:33 vm07 bash[56315]: cluster 2026-03-09T14:41:32.557748+0000 mgr.y (mgr.44103) 205 : cluster [DBG] pgmap v122: 161 pgs: 45 active+undersized, 24 active+undersized+degraded, 92 active+clean; 457 KiB data, 266 MiB used, 160 GiB / 160 GiB avail; 80/627 objects degraded (12.759%) 2026-03-09T14:41:34.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:33 vm07 bash[56315]: cluster 2026-03-09T14:41:32.715210+0000 mon.a (mon.0) 512 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-09T14:41:34.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:33 vm07 bash[56315]: cluster 2026-03-09T14:41:32.715210+0000 mon.a (mon.0) 512 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-09T14:41:34.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:33 vm07 bash[56315]: audit 2026-03-09T14:41:33.624067+0000 mon.a (mon.0) 513 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:34.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:33 vm07 bash[56315]: audit 2026-03-09T14:41:33.624067+0000 mon.a (mon.0) 513 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:34.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:33 vm07 bash[56315]: audit 2026-03-09T14:41:33.628987+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:34.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:33 vm07 bash[56315]: audit 2026-03-09T14:41:33.628987+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:34.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:33 vm07 bash[55244]: cluster 2026-03-09T14:41:31.505277+0000 osd.7 (osd.7) 1 : cluster [WRN] OSD bench result of 27646.566660 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. 2026-03-09T14:41:34.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:33 vm07 bash[55244]: cluster 2026-03-09T14:41:31.505277+0000 osd.7 (osd.7) 1 : cluster [WRN] OSD bench result of 27646.566660 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. 
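The CRUSH weight of 0.0195 that osd.7 re-registers in the audit entries above is expressed in TiB, i.e. about 20 GiB per OSD; with 8 OSDs at roughly 20 GiB each, that accounts for the 160 GiB total capacity the pgmap lines report. The placement resulting from the set-device-class and create-or-move commands can be inspected with the standard CLI, for example:

  ceph osd tree   # shows each OSD's weight, device class, and host placement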
2026-03-09T14:41:34.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:33 vm07 bash[55244]: cluster 2026-03-09T14:41:32.557748+0000 mgr.y (mgr.44103) 205 : cluster [DBG] pgmap v122: 161 pgs: 45 active+undersized, 24 active+undersized+degraded, 92 active+clean; 457 KiB data, 266 MiB used, 160 GiB / 160 GiB avail; 80/627 objects degraded (12.759%) 2026-03-09T14:41:34.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:33 vm07 bash[55244]: cluster 2026-03-09T14:41:32.557748+0000 mgr.y (mgr.44103) 205 : cluster [DBG] pgmap v122: 161 pgs: 45 active+undersized, 24 active+undersized+degraded, 92 active+clean; 457 KiB data, 266 MiB used, 160 GiB / 160 GiB avail; 80/627 objects degraded (12.759%) 2026-03-09T14:41:34.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:33 vm07 bash[55244]: cluster 2026-03-09T14:41:32.715210+0000 mon.a (mon.0) 512 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-09T14:41:34.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:33 vm07 bash[55244]: cluster 2026-03-09T14:41:32.715210+0000 mon.a (mon.0) 512 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-09T14:41:34.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:33 vm07 bash[55244]: audit 2026-03-09T14:41:33.624067+0000 mon.a (mon.0) 513 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:34.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:33 vm07 bash[55244]: audit 2026-03-09T14:41:33.624067+0000 mon.a (mon.0) 513 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:34.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:33 vm07 bash[55244]: audit 2026-03-09T14:41:33.628987+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:34.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:33 vm07 bash[55244]: audit 2026-03-09T14:41:33.628987+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:34.502 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:41:34 vm11 bash[41290]: ts=2026-03-09T14:41:34.146Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.7\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.7\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", cluster_addr=\"192.168.123.111\", device_class=\"hdd\", hostname=\"vm11\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.111\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.7\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.111\", device_class=\"hdd\", hostname=\"vm11\", instance=\"192.168.123.111:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.111\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:41:35.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:35 vm11 bash[43577]: audit 2026-03-09T14:41:34.195754+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:35.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:35 vm11 bash[43577]: audit 2026-03-09T14:41:34.195754+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:35.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:35 vm11 bash[43577]: audit 2026-03-09T14:41:34.205167+0000 mon.a (mon.0) 516 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:35.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:35 vm11 bash[43577]: audit 2026-03-09T14:41:34.205167+0000 mon.a (mon.0) 516 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:35.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:35 vm07 bash[55244]: audit 2026-03-09T14:41:34.195754+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:35.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:35 vm07 bash[55244]: audit 2026-03-09T14:41:34.195754+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:35.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:35 vm07 bash[55244]: audit 2026-03-09T14:41:34.205167+0000 mon.a (mon.0) 516 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:35.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:35 vm07 bash[55244]: audit 2026-03-09T14:41:34.205167+0000 mon.a (mon.0) 516 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:35.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:35 vm07 bash[56315]: audit 2026-03-09T14:41:34.195754+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:35.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:35 vm07 bash[56315]: audit 2026-03-09T14:41:34.195754+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:35.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:35 vm07 bash[56315]: audit 2026-03-09T14:41:34.205167+0000 mon.a 
(mon.0) 516 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:35.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:35 vm07 bash[56315]: audit 2026-03-09T14:41:34.205167+0000 mon.a (mon.0) 516 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:36.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:36 vm11 bash[43577]: cluster 2026-03-09T14:41:34.558196+0000 mgr.y (mgr.44103) 206 : cluster [DBG] pgmap v124: 161 pgs: 32 active+undersized, 14 active+undersized+degraded, 115 active+clean; 457 KiB data, 266 MiB used, 160 GiB / 160 GiB avail; 56/627 objects degraded (8.931%) 2026-03-09T14:41:36.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:36 vm11 bash[43577]: cluster 2026-03-09T14:41:34.558196+0000 mgr.y (mgr.44103) 206 : cluster [DBG] pgmap v124: 161 pgs: 32 active+undersized, 14 active+undersized+degraded, 115 active+clean; 457 KiB data, 266 MiB used, 160 GiB / 160 GiB avail; 56/627 objects degraded (8.931%) 2026-03-09T14:41:36.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:36 vm11 bash[43577]: cluster 2026-03-09T14:41:35.201424+0000 mon.a (mon.0) 517 : cluster [WRN] Health check update: Degraded data redundancy: 56/627 objects degraded (8.931%), 14 pgs degraded (PG_DEGRADED) 2026-03-09T14:41:36.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:36 vm11 bash[43577]: cluster 2026-03-09T14:41:35.201424+0000 mon.a (mon.0) 517 : cluster [WRN] Health check update: Degraded data redundancy: 56/627 objects degraded (8.931%), 14 pgs degraded (PG_DEGRADED) 2026-03-09T14:41:36.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:36 vm07 bash[55244]: cluster 2026-03-09T14:41:34.558196+0000 mgr.y (mgr.44103) 206 : cluster [DBG] pgmap v124: 161 pgs: 32 active+undersized, 14 active+undersized+degraded, 115 active+clean; 457 KiB data, 266 MiB used, 160 GiB / 160 GiB avail; 56/627 objects degraded (8.931%) 2026-03-09T14:41:36.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:36 vm07 bash[55244]: cluster 2026-03-09T14:41:34.558196+0000 mgr.y (mgr.44103) 206 : cluster [DBG] pgmap v124: 161 pgs: 32 active+undersized, 14 active+undersized+degraded, 115 active+clean; 457 KiB data, 266 MiB used, 160 GiB / 160 GiB avail; 56/627 objects degraded (8.931%) 2026-03-09T14:41:36.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:36 vm07 bash[55244]: cluster 2026-03-09T14:41:35.201424+0000 mon.a (mon.0) 517 : cluster [WRN] Health check update: Degraded data redundancy: 56/627 objects degraded (8.931%), 14 pgs degraded (PG_DEGRADED) 2026-03-09T14:41:36.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:36 vm07 bash[55244]: cluster 2026-03-09T14:41:35.201424+0000 mon.a (mon.0) 517 : cluster [WRN] Health check update: Degraded data redundancy: 56/627 objects degraded (8.931%), 14 pgs degraded (PG_DEGRADED) 2026-03-09T14:41:36.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:36 vm07 bash[56315]: cluster 2026-03-09T14:41:34.558196+0000 mgr.y (mgr.44103) 206 : cluster [DBG] pgmap v124: 161 pgs: 32 active+undersized, 14 active+undersized+degraded, 115 active+clean; 457 KiB data, 266 MiB used, 160 GiB / 160 GiB avail; 56/627 objects degraded (8.931%) 2026-03-09T14:41:36.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:36 vm07 bash[56315]: cluster 2026-03-09T14:41:34.558196+0000 mgr.y (mgr.44103) 206 : cluster [DBG] pgmap v124: 161 pgs: 32 active+undersized, 14 active+undersized+degraded, 115 active+clean; 457 KiB data, 266 MiB used, 160 GiB / 160 GiB avail; 56/627 objects degraded 
(8.931%) 2026-03-09T14:41:36.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:36 vm07 bash[56315]: cluster 2026-03-09T14:41:35.201424+0000 mon.a (mon.0) 517 : cluster [WRN] Health check update: Degraded data redundancy: 56/627 objects degraded (8.931%), 14 pgs degraded (PG_DEGRADED) 2026-03-09T14:41:36.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:36 vm07 bash[56315]: cluster 2026-03-09T14:41:35.201424+0000 mon.a (mon.0) 517 : cluster [WRN] Health check update: Degraded data redundancy: 56/627 objects degraded (8.931%), 14 pgs degraded (PG_DEGRADED) 2026-03-09T14:41:37.242 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:41:36 vm11 bash[41290]: ts=2026-03-09T14:41:36.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:41:37.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:37 vm11 bash[43577]: cluster 2026-03-09T14:41:37.198657+0000 mon.a (mon.0) 518 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 56/627 objects degraded (8.931%), 14 pgs degraded) 2026-03-09T14:41:37.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:37 vm11 bash[43577]: cluster 2026-03-09T14:41:37.198657+0000 mon.a (mon.0) 518 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 56/627 objects degraded (8.931%), 14 pgs degraded) 2026-03-09T14:41:37.559 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:37 vm07 bash[55244]: cluster 2026-03-09T14:41:37.198657+0000 mon.a (mon.0) 518 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 56/627 objects degraded (8.931%), 14 pgs degraded) 2026-03-09T14:41:37.559 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:37 vm07 bash[55244]: cluster 2026-03-09T14:41:37.198657+0000 mon.a (mon.0) 518 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 56/627 objects degraded (8.931%), 14 pgs degraded) 2026-03-09T14:41:37.559 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:37 vm07 bash[56315]: cluster 2026-03-09T14:41:37.198657+0000 mon.a (mon.0) 518 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 56/627 objects degraded (8.931%), 14 pgs degraded) 2026-03-09T14:41:37.559 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:37 vm07 bash[56315]: cluster 2026-03-09T14:41:37.198657+0000 mon.a (mon.0) 518 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 56/627 objects degraded (8.931%), 14 pgs degraded) 2026-03-09T14:41:38.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:38 vm11 bash[43577]: cluster 2026-03-09T14:41:36.558543+0000 mgr.y (mgr.44103) 207 : cluster [DBG] pgmap v125: 161 pgs: 161 active+clean; 457 KiB data, 266 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:41:38.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:38 vm11 bash[43577]: cluster 2026-03-09T14:41:36.558543+0000 mgr.y (mgr.44103) 207 : cluster [DBG] pgmap v125: 161 pgs: 161 active+clean; 457 KiB data, 266 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:41:38.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:38 vm11 bash[43577]: audit 2026-03-09T14:41:37.582371+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:38.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:38 vm11 bash[43577]: audit 2026-03-09T14:41:37.582371+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:38.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:38 vm11 bash[43577]: audit 2026-03-09T14:41:37.583075+0000 mon.a (mon.0) 520 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:41:38.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:38 vm11 bash[43577]: audit 2026-03-09T14:41:37.583075+0000 mon.a (mon.0) 520 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:41:38.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:38 vm07 bash[55244]: cluster 2026-03-09T14:41:36.558543+0000 mgr.y (mgr.44103) 207 : cluster [DBG] pgmap v125: 161 pgs: 161 active+clean; 457 KiB data, 266 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:41:38.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:38 vm07 bash[55244]: cluster 2026-03-09T14:41:36.558543+0000 mgr.y (mgr.44103) 207 : cluster [DBG] pgmap v125: 161 pgs: 161 active+clean; 457 KiB data, 266 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:41:38.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:38 vm07 bash[55244]: audit 2026-03-09T14:41:37.582371+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:38.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:38 vm07 bash[55244]: audit 2026-03-09T14:41:37.582371+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:38.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:38 vm07 bash[55244]: audit 2026-03-09T14:41:37.583075+0000 mon.a (mon.0) 520 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:41:38.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:38 vm07 bash[55244]: audit 2026-03-09T14:41:37.583075+0000 mon.a (mon.0) 520 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:41:38.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:38 vm07 bash[56315]: cluster 2026-03-09T14:41:36.558543+0000 mgr.y 
(mgr.44103) 207 : cluster [DBG] pgmap v125: 161 pgs: 161 active+clean; 457 KiB data, 266 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:41:38.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:38 vm07 bash[56315]: cluster 2026-03-09T14:41:36.558543+0000 mgr.y (mgr.44103) 207 : cluster [DBG] pgmap v125: 161 pgs: 161 active+clean; 457 KiB data, 266 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:41:38.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:38 vm07 bash[56315]: audit 2026-03-09T14:41:37.582371+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:38.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:38 vm07 bash[56315]: audit 2026-03-09T14:41:37.582371+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:38.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:38 vm07 bash[56315]: audit 2026-03-09T14:41:37.583075+0000 mon.a (mon.0) 520 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:41:38.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:38 vm07 bash[56315]: audit 2026-03-09T14:41:37.583075+0000 mon.a (mon.0) 520 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:41:39.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:39 vm11 bash[43577]: audit 2026-03-09T14:41:37.568850+0000 mgr.y (mgr.44103) 208 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:41:39.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:39 vm11 bash[43577]: audit 2026-03-09T14:41:37.568850+0000 mgr.y (mgr.44103) 208 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:41:39.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:39 vm07 bash[55244]: audit 2026-03-09T14:41:37.568850+0000 mgr.y (mgr.44103) 208 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:41:39.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:39 vm07 bash[55244]: audit 2026-03-09T14:41:37.568850+0000 mgr.y (mgr.44103) 208 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:41:39.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:39 vm07 bash[56315]: audit 2026-03-09T14:41:37.568850+0000 mgr.y (mgr.44103) 208 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:41:39.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:39 vm07 bash[56315]: audit 2026-03-09T14:41:37.568850+0000 mgr.y (mgr.44103) 208 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:41:40.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:40 vm07 bash[55244]: cluster 2026-03-09T14:41:38.558842+0000 mgr.y (mgr.44103) 209 : cluster [DBG] pgmap v126: 161 pgs: 161 active+clean; 457 KiB data, 266 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:41:40.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:40 vm07 bash[55244]: 
cluster 2026-03-09T14:41:38.558842+0000 mgr.y (mgr.44103) 209 : cluster [DBG] pgmap v126: 161 pgs: 161 active+clean; 457 KiB data, 266 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:41:40.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:40 vm07 bash[56315]: cluster 2026-03-09T14:41:38.558842+0000 mgr.y (mgr.44103) 209 : cluster [DBG] pgmap v126: 161 pgs: 161 active+clean; 457 KiB data, 266 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:41:40.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:40 vm07 bash[56315]: cluster 2026-03-09T14:41:38.558842+0000 mgr.y (mgr.44103) 209 : cluster [DBG] pgmap v126: 161 pgs: 161 active+clean; 457 KiB data, 266 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:41:40.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:40 vm11 bash[43577]: cluster 2026-03-09T14:41:38.558842+0000 mgr.y (mgr.44103) 209 : cluster [DBG] pgmap v126: 161 pgs: 161 active+clean; 457 KiB data, 266 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:41:40.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:40 vm11 bash[43577]: cluster 2026-03-09T14:41:38.558842+0000 mgr.y (mgr.44103) 209 : cluster [DBG] pgmap v126: 161 pgs: 161 active+clean; 457 KiB data, 266 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:41:42.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: cluster 2026-03-09T14:41:40.559181+0000 mgr.y (mgr.44103) 210 : cluster [DBG] pgmap v127: 161 pgs: 161 active+clean; 457 KiB data, 266 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:41:42.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: cluster 2026-03-09T14:41:40.559181+0000 mgr.y (mgr.44103) 210 : cluster [DBG] pgmap v127: 161 pgs: 161 active+clean; 457 KiB data, 266 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:41:42.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.837630+0000 mon.a (mon.0) 521 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:42.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.837630+0000 mon.a (mon.0) 521 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:42.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.841563+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:42.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.841563+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:42.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.842217+0000 mon.a (mon.0) 523 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:42.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.842217+0000 mon.a (mon.0) 523 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:42.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.842614+0000 mon.a (mon.0) 524 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": 
"client.admin"}]: dispatch 2026-03-09T14:41:42.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.842614+0000 mon.a (mon.0) 524 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:41:42.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.845752+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:42.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.845752+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:42.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.889337+0000 mon.a (mon.0) 526 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:41:42.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.889337+0000 mon.a (mon.0) 526 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:41:42.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.890441+0000 mon.a (mon.0) 527 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:42.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.890441+0000 mon.a (mon.0) 527 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:42.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.891159+0000 mon.a (mon.0) 528 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:42.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.891159+0000 mon.a (mon.0) 528 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:42.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.891714+0000 mon.a (mon.0) 529 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:42.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.891714+0000 mon.a (mon.0) 529 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:42.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.892640+0000 mon.a (mon.0) 530 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:42.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.892640+0000 mon.a (mon.0) 530 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:42.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 
vm07 bash[56315]: cephadm 2026-03-09T14:41:40.893017+0000 mgr.y (mgr.44103) 211 : cephadm [INF] Upgrade: Setting container_image for all osd 2026-03-09T14:41:42.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: cephadm 2026-03-09T14:41:40.893017+0000 mgr.y (mgr.44103) 211 : cephadm [INF] Upgrade: Setting container_image for all osd 2026-03-09T14:41:42.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.897108+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:42.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.897108+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:42.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.898777+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch 2026-03-09T14:41:42.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.898777+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch 2026-03-09T14:41:42.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.903371+0000 mon.a (mon.0) 533 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]': finished 2026-03-09T14:41:42.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.903371+0000 mon.a (mon.0) 533 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]': finished 2026-03-09T14:41:42.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.904790+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch 2026-03-09T14:41:42.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.904790+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.908061+0000 mon.a (mon.0) 535 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]': finished 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.908061+0000 mon.a (mon.0) 535 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]': finished 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.909359+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config 
rm", "name": "container_image", "who": "osd.2"}]: dispatch 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.909359+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]: dispatch 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.912775+0000 mon.a (mon.0) 537 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]': finished 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.912775+0000 mon.a (mon.0) 537 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]': finished 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.913874+0000 mon.a (mon.0) 538 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]: dispatch 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.913874+0000 mon.a (mon.0) 538 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]: dispatch 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.917483+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]': finished 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.917483+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]': finished 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.918578+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]: dispatch 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.918578+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]: dispatch 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.922440+0000 mon.a (mon.0) 541 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]': finished 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.922440+0000 mon.a (mon.0) 541 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]': finished 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 
14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.923533+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]: dispatch 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.923533+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]: dispatch 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.928295+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]': finished 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.928295+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]': finished 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.929369+0000 mon.a (mon.0) 544 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]: dispatch 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.929369+0000 mon.a (mon.0) 544 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]: dispatch 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.932919+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]': finished 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.932919+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]': finished 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.933999+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]: dispatch 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.933999+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]: dispatch 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.937232+0000 mon.a (mon.0) 547 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]': finished 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.937232+0000 mon.a (mon.0) 547 : audit [INF] from='mgr.44103 
192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]': finished 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: cephadm 2026-03-09T14:41:40.938883+0000 mgr.y (mgr.44103) 212 : cephadm [INF] Upgrade: Setting require_osd_release to 19 squid 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: cephadm 2026-03-09T14:41:40.938883+0000 mgr.y (mgr.44103) 212 : cephadm [INF] Upgrade: Setting require_osd_release to 19 squid 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.939021+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd require-osd-release", "release": "squid"}]: dispatch 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:41 vm07 bash[56315]: audit 2026-03-09T14:41:40.939021+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd require-osd-release", "release": "squid"}]: dispatch 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: cluster 2026-03-09T14:41:40.559181+0000 mgr.y (mgr.44103) 210 : cluster [DBG] pgmap v127: 161 pgs: 161 active+clean; 457 KiB data, 266 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: cluster 2026-03-09T14:41:40.559181+0000 mgr.y (mgr.44103) 210 : cluster [DBG] pgmap v127: 161 pgs: 161 active+clean; 457 KiB data, 266 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.837630+0000 mon.a (mon.0) 521 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.837630+0000 mon.a (mon.0) 521 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.841563+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.841563+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.842217+0000 mon.a (mon.0) 523 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.842217+0000 mon.a (mon.0) 523 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.842614+0000 mon.a (mon.0) 524 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:41:42.156 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.842614+0000 mon.a (mon.0) 524 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.845752+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.845752+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.889337+0000 mon.a (mon.0) 526 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.889337+0000 mon.a (mon.0) 526 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.890441+0000 mon.a (mon.0) 527 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.890441+0000 mon.a (mon.0) 527 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.891159+0000 mon.a (mon.0) 528 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.891159+0000 mon.a (mon.0) 528 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.891714+0000 mon.a (mon.0) 529 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.891714+0000 mon.a (mon.0) 529 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.892640+0000 mon.a (mon.0) 530 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.892640+0000 mon.a (mon.0) 530 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: cephadm 
2026-03-09T14:41:40.893017+0000 mgr.y (mgr.44103) 211 : cephadm [INF] Upgrade: Setting container_image for all osd 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: cephadm 2026-03-09T14:41:40.893017+0000 mgr.y (mgr.44103) 211 : cephadm [INF] Upgrade: Setting container_image for all osd 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.897108+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.897108+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:42.156 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.898777+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch 2026-03-09T14:41:42.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.898777+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch 2026-03-09T14:41:42.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.903371+0000 mon.a (mon.0) 533 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]': finished 2026-03-09T14:41:42.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.903371+0000 mon.a (mon.0) 533 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]': finished 2026-03-09T14:41:42.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.904790+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch 2026-03-09T14:41:42.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.904790+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch 2026-03-09T14:41:42.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.908061+0000 mon.a (mon.0) 535 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]': finished 2026-03-09T14:41:42.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.908061+0000 mon.a (mon.0) 535 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]': finished 2026-03-09T14:41:42.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.909359+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": 
"container_image", "who": "osd.2"}]: dispatch 2026-03-09T14:41:42.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.909359+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]: dispatch 2026-03-09T14:41:42.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.912775+0000 mon.a (mon.0) 537 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]': finished 2026-03-09T14:41:42.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.912775+0000 mon.a (mon.0) 537 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]': finished 2026-03-09T14:41:42.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.913874+0000 mon.a (mon.0) 538 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]: dispatch 2026-03-09T14:41:42.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.913874+0000 mon.a (mon.0) 538 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]: dispatch 2026-03-09T14:41:42.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.917483+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]': finished 2026-03-09T14:41:42.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.917483+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]': finished 2026-03-09T14:41:42.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.918578+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]: dispatch 2026-03-09T14:41:42.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.918578+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]: dispatch 2026-03-09T14:41:42.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.922440+0000 mon.a (mon.0) 541 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]': finished 2026-03-09T14:41:42.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.922440+0000 mon.a (mon.0) 541 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]': finished 2026-03-09T14:41:42.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 
bash[55244]: audit 2026-03-09T14:41:40.923533+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]: dispatch 2026-03-09T14:41:42.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.923533+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]: dispatch 2026-03-09T14:41:42.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.928295+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]': finished 2026-03-09T14:41:42.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.928295+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]': finished 2026-03-09T14:41:42.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.929369+0000 mon.a (mon.0) 544 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]: dispatch 2026-03-09T14:41:42.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.929369+0000 mon.a (mon.0) 544 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]: dispatch 2026-03-09T14:41:42.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.932919+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]': finished 2026-03-09T14:41:42.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.932919+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]': finished 2026-03-09T14:41:42.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.933999+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]: dispatch 2026-03-09T14:41:42.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.933999+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]: dispatch 2026-03-09T14:41:42.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.937232+0000 mon.a (mon.0) 547 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]': finished 2026-03-09T14:41:42.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.937232+0000 mon.a (mon.0) 547 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' 
entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]': finished 2026-03-09T14:41:42.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: cephadm 2026-03-09T14:41:40.938883+0000 mgr.y (mgr.44103) 212 : cephadm [INF] Upgrade: Setting require_osd_release to 19 squid 2026-03-09T14:41:42.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: cephadm 2026-03-09T14:41:40.938883+0000 mgr.y (mgr.44103) 212 : cephadm [INF] Upgrade: Setting require_osd_release to 19 squid 2026-03-09T14:41:42.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.939021+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd require-osd-release", "release": "squid"}]: dispatch 2026-03-09T14:41:42.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:41 vm07 bash[55244]: audit 2026-03-09T14:41:40.939021+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd require-osd-release", "release": "squid"}]: dispatch 2026-03-09T14:41:42.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: cluster 2026-03-09T14:41:40.559181+0000 mgr.y (mgr.44103) 210 : cluster [DBG] pgmap v127: 161 pgs: 161 active+clean; 457 KiB data, 266 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:41:42.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: cluster 2026-03-09T14:41:40.559181+0000 mgr.y (mgr.44103) 210 : cluster [DBG] pgmap v127: 161 pgs: 161 active+clean; 457 KiB data, 266 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:41:42.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.837630+0000 mon.a (mon.0) 521 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:42.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.837630+0000 mon.a (mon.0) 521 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:42.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.841563+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:42.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.841563+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:42.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.842217+0000 mon.a (mon.0) 523 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:42.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.842217+0000 mon.a (mon.0) 523 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:42.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.842614+0000 mon.a (mon.0) 524 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:41:42.253 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.842614+0000 mon.a (mon.0) 524 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:41:42.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.845752+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:42.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.845752+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:42.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.889337+0000 mon.a (mon.0) 526 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:41:42.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.889337+0000 mon.a (mon.0) 526 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:41:42.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.890441+0000 mon.a (mon.0) 527 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:42.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.890441+0000 mon.a (mon.0) 527 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:42.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.891159+0000 mon.a (mon.0) 528 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:42.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.891159+0000 mon.a (mon.0) 528 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:42.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.891714+0000 mon.a (mon.0) 529 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:42.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.891714+0000 mon.a (mon.0) 529 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:42.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.892640+0000 mon.a (mon.0) 530 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:42.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.892640+0000 mon.a (mon.0) 530 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:42.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: cephadm 
2026-03-09T14:41:40.893017+0000 mgr.y (mgr.44103) 211 : cephadm [INF] Upgrade: Setting container_image for all osd 2026-03-09T14:41:42.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: cephadm 2026-03-09T14:41:40.893017+0000 mgr.y (mgr.44103) 211 : cephadm [INF] Upgrade: Setting container_image for all osd 2026-03-09T14:41:42.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.897108+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:42.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.897108+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:42.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.898777+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch 2026-03-09T14:41:42.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.898777+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch 2026-03-09T14:41:42.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.903371+0000 mon.a (mon.0) 533 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]': finished 2026-03-09T14:41:42.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.903371+0000 mon.a (mon.0) 533 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]': finished 2026-03-09T14:41:42.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.904790+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch 2026-03-09T14:41:42.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.904790+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch 2026-03-09T14:41:42.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.908061+0000 mon.a (mon.0) 535 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]': finished 2026-03-09T14:41:42.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.908061+0000 mon.a (mon.0) 535 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]': finished 2026-03-09T14:41:42.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.909359+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": 
"container_image", "who": "osd.2"}]: dispatch 2026-03-09T14:41:42.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.909359+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]: dispatch 2026-03-09T14:41:42.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.912775+0000 mon.a (mon.0) 537 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]': finished 2026-03-09T14:41:42.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.912775+0000 mon.a (mon.0) 537 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]': finished 2026-03-09T14:41:42.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.913874+0000 mon.a (mon.0) 538 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]: dispatch 2026-03-09T14:41:42.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.913874+0000 mon.a (mon.0) 538 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]: dispatch 2026-03-09T14:41:42.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.917483+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]': finished 2026-03-09T14:41:42.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.917483+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]': finished 2026-03-09T14:41:42.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.918578+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]: dispatch 2026-03-09T14:41:42.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.918578+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]: dispatch 2026-03-09T14:41:42.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.922440+0000 mon.a (mon.0) 541 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]': finished 2026-03-09T14:41:42.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.922440+0000 mon.a (mon.0) 541 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]': finished 2026-03-09T14:41:42.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 
bash[43577]: audit 2026-03-09T14:41:40.923533+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]: dispatch 2026-03-09T14:41:42.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.923533+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]: dispatch 2026-03-09T14:41:42.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.928295+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]': finished 2026-03-09T14:41:42.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.928295+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]': finished 2026-03-09T14:41:42.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.929369+0000 mon.a (mon.0) 544 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]: dispatch 2026-03-09T14:41:42.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.929369+0000 mon.a (mon.0) 544 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]: dispatch 2026-03-09T14:41:42.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.932919+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]': finished 2026-03-09T14:41:42.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.932919+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]': finished 2026-03-09T14:41:42.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.933999+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]: dispatch 2026-03-09T14:41:42.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.933999+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]: dispatch 2026-03-09T14:41:42.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.937232+0000 mon.a (mon.0) 547 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]': finished 2026-03-09T14:41:42.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.937232+0000 mon.a (mon.0) 547 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' 
entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]': finished 2026-03-09T14:41:42.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: cephadm 2026-03-09T14:41:40.938883+0000 mgr.y (mgr.44103) 212 : cephadm [INF] Upgrade: Setting require_osd_release to 19 squid 2026-03-09T14:41:42.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: cephadm 2026-03-09T14:41:40.938883+0000 mgr.y (mgr.44103) 212 : cephadm [INF] Upgrade: Setting require_osd_release to 19 squid 2026-03-09T14:41:42.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.939021+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd require-osd-release", "release": "squid"}]: dispatch 2026-03-09T14:41:42.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:41 vm11 bash[43577]: audit 2026-03-09T14:41:40.939021+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd require-osd-release", "release": "squid"}]: dispatch 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:42 vm07 bash[56315]: cluster 2026-03-09T14:41:41.936709+0000 mon.a (mon.0) 549 : cluster [INF] Health check cleared: OSD_UPGRADE_FINISHED (was: all OSDs are running squid or later but require_osd_release < squid) 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:42 vm07 bash[56315]: cluster 2026-03-09T14:41:41.936709+0000 mon.a (mon.0) 549 : cluster [INF] Health check cleared: OSD_UPGRADE_FINISHED (was: all OSDs are running squid or later but require_osd_release < squid) 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:42 vm07 bash[56315]: cluster 2026-03-09T14:41:41.936725+0000 mon.a (mon.0) 550 : cluster [INF] Cluster is now healthy 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:42 vm07 bash[56315]: cluster 2026-03-09T14:41:41.936725+0000 mon.a (mon.0) 550 : cluster [INF] Cluster is now healthy 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:42 vm07 bash[56315]: audit 2026-03-09T14:41:41.939431+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "osd require-osd-release", "release": "squid"}]': finished 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:42 vm07 bash[56315]: audit 2026-03-09T14:41:41.939431+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "osd require-osd-release", "release": "squid"}]': finished 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:42 vm07 bash[56315]: cluster 2026-03-09T14:41:41.942551+0000 mon.a (mon.0) 552 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:42 vm07 bash[56315]: cluster 2026-03-09T14:41:41.942551+0000 mon.a (mon.0) 552 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:42 vm07 bash[56315]: audit 2026-03-09T14:41:41.945930+0000 mon.a (mon.0) 553 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:42 vm07 bash[56315]: audit 2026-03-09T14:41:41.945930+0000 mon.a (mon.0) 553 : 
audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:42 vm07 bash[56315]: cephadm 2026-03-09T14:41:41.946466+0000 mgr.y (mgr.44103) 213 : cephadm [INF] Upgrade: Setting container_image for all mds 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:42 vm07 bash[56315]: cephadm 2026-03-09T14:41:41.946466+0000 mgr.y (mgr.44103) 213 : cephadm [INF] Upgrade: Setting container_image for all mds 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:42 vm07 bash[56315]: audit 2026-03-09T14:41:41.951138+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:42 vm07 bash[56315]: audit 2026-03-09T14:41:41.951138+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:42 vm07 bash[56315]: cephadm 2026-03-09T14:41:42.400523+0000 mgr.y (mgr.44103) 214 : cephadm [INF] Upgrade: Updating rgw.smpl.vm07.tkkeli (1/4) 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:42 vm07 bash[56315]: cephadm 2026-03-09T14:41:42.400523+0000 mgr.y (mgr.44103) 214 : cephadm [INF] Upgrade: Updating rgw.smpl.vm07.tkkeli (1/4) 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:42 vm07 bash[56315]: audit 2026-03-09T14:41:42.405048+0000 mon.a (mon.0) 555 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:42 vm07 bash[56315]: audit 2026-03-09T14:41:42.405048+0000 mon.a (mon.0) 555 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:42 vm07 bash[56315]: audit 2026-03-09T14:41:42.410383+0000 mon.a (mon.0) 556 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm07.tkkeli", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:42 vm07 bash[56315]: audit 2026-03-09T14:41:42.410383+0000 mon.a (mon.0) 556 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm07.tkkeli", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:42 vm07 bash[56315]: audit 2026-03-09T14:41:42.411301+0000 mon.a (mon.0) 557 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:42 vm07 bash[56315]: audit 2026-03-09T14:41:42.411301+0000 mon.a (mon.0) 557 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:42 vm07 bash[56315]: cephadm 2026-03-09T14:41:42.411861+0000 mgr.y (mgr.44103) 215 : cephadm [INF] Deploying daemon rgw.smpl.vm07.tkkeli on vm07 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 
14:41:42 vm07 bash[56315]: cephadm 2026-03-09T14:41:42.411861+0000 mgr.y (mgr.44103) 215 : cephadm [INF] Deploying daemon rgw.smpl.vm07.tkkeli on vm07 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:42 vm07 bash[56315]: audit 2026-03-09T14:41:42.653294+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:42 vm07 bash[56315]: audit 2026-03-09T14:41:42.653294+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:42 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:42 vm07 bash[55244]: cluster 2026-03-09T14:41:41.936709+0000 mon.a (mon.0) 549 : cluster [INF] Health check cleared: OSD_UPGRADE_FINISHED (was: all OSDs are running squid or later but require_osd_release < squid) 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:42 vm07 bash[55244]: cluster 2026-03-09T14:41:41.936709+0000 mon.a (mon.0) 549 : cluster [INF] Health check cleared: OSD_UPGRADE_FINISHED (was: all OSDs are running squid or later but require_osd_release < squid) 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:42 vm07 bash[55244]: cluster 2026-03-09T14:41:41.936725+0000 mon.a (mon.0) 550 : cluster [INF] Cluster is now healthy 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:42 vm07 bash[55244]: cluster 2026-03-09T14:41:41.936725+0000 mon.a (mon.0) 550 : cluster [INF] Cluster is now healthy 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:42 vm07 bash[55244]: audit 2026-03-09T14:41:41.939431+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "osd require-osd-release", "release": "squid"}]': finished 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:42 vm07 bash[55244]: audit 2026-03-09T14:41:41.939431+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "osd require-osd-release", "release": "squid"}]': finished 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:42 vm07 bash[55244]: cluster 2026-03-09T14:41:41.942551+0000 mon.a (mon.0) 552 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:42 vm07 bash[55244]: cluster 2026-03-09T14:41:41.942551+0000 mon.a (mon.0) 552 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:42 vm07 bash[55244]: audit 2026-03-09T14:41:41.945930+0000 mon.a (mon.0) 553 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:42 vm07 bash[55244]: audit 2026-03-09T14:41:41.945930+0000 mon.a (mon.0) 553 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": 
"versions"}]: dispatch 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:42 vm07 bash[55244]: cephadm 2026-03-09T14:41:41.946466+0000 mgr.y (mgr.44103) 213 : cephadm [INF] Upgrade: Setting container_image for all mds 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:42 vm07 bash[55244]: cephadm 2026-03-09T14:41:41.946466+0000 mgr.y (mgr.44103) 213 : cephadm [INF] Upgrade: Setting container_image for all mds 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:42 vm07 bash[55244]: audit 2026-03-09T14:41:41.951138+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:42 vm07 bash[55244]: audit 2026-03-09T14:41:41.951138+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:42.976 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:42 vm07 bash[55244]: cephadm 2026-03-09T14:41:42.400523+0000 mgr.y (mgr.44103) 214 : cephadm [INF] Upgrade: Updating rgw.smpl.vm07.tkkeli (1/4) 2026-03-09T14:41:42.977 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:42 vm07 bash[55244]: cephadm 2026-03-09T14:41:42.400523+0000 mgr.y (mgr.44103) 214 : cephadm [INF] Upgrade: Updating rgw.smpl.vm07.tkkeli (1/4) 2026-03-09T14:41:42.977 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:42 vm07 bash[55244]: audit 2026-03-09T14:41:42.405048+0000 mon.a (mon.0) 555 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:42.977 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:42 vm07 bash[55244]: audit 2026-03-09T14:41:42.405048+0000 mon.a (mon.0) 555 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:42.977 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:42 vm07 bash[55244]: audit 2026-03-09T14:41:42.410383+0000 mon.a (mon.0) 556 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm07.tkkeli", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:41:42.977 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:42 vm07 bash[55244]: audit 2026-03-09T14:41:42.410383+0000 mon.a (mon.0) 556 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm07.tkkeli", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:41:42.977 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:42 vm07 bash[55244]: audit 2026-03-09T14:41:42.411301+0000 mon.a (mon.0) 557 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:42.977 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:42 vm07 bash[55244]: audit 2026-03-09T14:41:42.411301+0000 mon.a (mon.0) 557 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:42.977 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:42 vm07 bash[55244]: cephadm 2026-03-09T14:41:42.411861+0000 mgr.y (mgr.44103) 215 : cephadm [INF] Deploying daemon rgw.smpl.vm07.tkkeli on vm07 2026-03-09T14:41:42.977 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:42 vm07 bash[55244]: cephadm 2026-03-09T14:41:42.411861+0000 mgr.y (mgr.44103) 215 
: cephadm [INF] Deploying daemon rgw.smpl.vm07.tkkeli on vm07 2026-03-09T14:41:42.977 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:42 vm07 bash[55244]: audit 2026-03-09T14:41:42.653294+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:42.977 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:42 vm07 bash[55244]: audit 2026-03-09T14:41:42.653294+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:42.977 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:42 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:42.977 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:41:42 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:42.977 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:41:42 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:42.977 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:41:42 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:42.977 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:41:42 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:42.977 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:41:42 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:42.977 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:41:42 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:42.977 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:41:42 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:43.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:42 vm11 bash[43577]: cluster 2026-03-09T14:41:41.936709+0000 mon.a (mon.0) 549 : cluster [INF] Health check cleared: OSD_UPGRADE_FINISHED (was: all OSDs are running squid or later but require_osd_release < squid) 2026-03-09T14:41:43.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:42 vm11 bash[43577]: cluster 2026-03-09T14:41:41.936709+0000 mon.a (mon.0) 549 : cluster [INF] Health check cleared: OSD_UPGRADE_FINISHED (was: all OSDs are running squid or later but require_osd_release < squid) 2026-03-09T14:41:43.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:42 vm11 bash[43577]: cluster 2026-03-09T14:41:41.936725+0000 mon.a (mon.0) 550 : cluster [INF] Cluster is now healthy 2026-03-09T14:41:43.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:42 vm11 bash[43577]: cluster 2026-03-09T14:41:41.936725+0000 mon.a (mon.0) 550 : cluster [INF] Cluster is now healthy 2026-03-09T14:41:43.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:42 vm11 bash[43577]: audit 2026-03-09T14:41:41.939431+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "osd require-osd-release", "release": "squid"}]': finished 2026-03-09T14:41:43.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:42 vm11 bash[43577]: audit 2026-03-09T14:41:41.939431+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "osd require-osd-release", "release": "squid"}]': finished 2026-03-09T14:41:43.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:42 vm11 bash[43577]: cluster 2026-03-09T14:41:41.942551+0000 mon.a (mon.0) 552 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in 2026-03-09T14:41:43.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:42 vm11 bash[43577]: cluster 2026-03-09T14:41:41.942551+0000 mon.a (mon.0) 552 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in 2026-03-09T14:41:43.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:42 vm11 bash[43577]: audit 2026-03-09T14:41:41.945930+0000 mon.a (mon.0) 553 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:43.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:42 vm11 bash[43577]: audit 2026-03-09T14:41:41.945930+0000 mon.a (mon.0) 553 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:41:43.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:42 vm11 bash[43577]: cephadm 2026-03-09T14:41:41.946466+0000 mgr.y (mgr.44103) 213 : cephadm [INF] Upgrade: Setting container_image for all mds 2026-03-09T14:41:43.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:42 vm11 bash[43577]: cephadm 
2026-03-09T14:41:41.946466+0000 mgr.y (mgr.44103) 213 : cephadm [INF] Upgrade: Setting container_image for all mds 2026-03-09T14:41:43.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:42 vm11 bash[43577]: audit 2026-03-09T14:41:41.951138+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:43.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:42 vm11 bash[43577]: audit 2026-03-09T14:41:41.951138+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:43.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:42 vm11 bash[43577]: cephadm 2026-03-09T14:41:42.400523+0000 mgr.y (mgr.44103) 214 : cephadm [INF] Upgrade: Updating rgw.smpl.vm07.tkkeli (1/4) 2026-03-09T14:41:43.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:42 vm11 bash[43577]: cephadm 2026-03-09T14:41:42.400523+0000 mgr.y (mgr.44103) 214 : cephadm [INF] Upgrade: Updating rgw.smpl.vm07.tkkeli (1/4) 2026-03-09T14:41:43.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:42 vm11 bash[43577]: audit 2026-03-09T14:41:42.405048+0000 mon.a (mon.0) 555 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:43.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:42 vm11 bash[43577]: audit 2026-03-09T14:41:42.405048+0000 mon.a (mon.0) 555 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:43.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:42 vm11 bash[43577]: audit 2026-03-09T14:41:42.410383+0000 mon.a (mon.0) 556 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm07.tkkeli", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:41:43.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:42 vm11 bash[43577]: audit 2026-03-09T14:41:42.410383+0000 mon.a (mon.0) 556 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm07.tkkeli", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:41:43.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:42 vm11 bash[43577]: audit 2026-03-09T14:41:42.411301+0000 mon.a (mon.0) 557 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:43.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:42 vm11 bash[43577]: audit 2026-03-09T14:41:42.411301+0000 mon.a (mon.0) 557 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:43.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:42 vm11 bash[43577]: cephadm 2026-03-09T14:41:42.411861+0000 mgr.y (mgr.44103) 215 : cephadm [INF] Deploying daemon rgw.smpl.vm07.tkkeli on vm07 2026-03-09T14:41:43.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:42 vm11 bash[43577]: cephadm 2026-03-09T14:41:42.411861+0000 mgr.y (mgr.44103) 215 : cephadm [INF] Deploying daemon rgw.smpl.vm07.tkkeli on vm07 2026-03-09T14:41:43.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:42 vm11 bash[43577]: audit 2026-03-09T14:41:42.653294+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:43.253 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:42 vm11 bash[43577]: audit 2026-03-09T14:41:42.653294+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:43.539 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:41:43 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:43.539 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:43 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:43.539 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:41:43 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:43.539 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:41:43 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:43.539 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:43 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:43.539 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:41:43 vm07 bash[52213]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:41:43] "GET /metrics HTTP/1.1" 200 38198 "" "Prometheus/2.51.0" 2026-03-09T14:41:43.539 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:41:43 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:43.539 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:41:43 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:43.540 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:41:43 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:43.540 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:41:43 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:44.400 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:41:44 vm11 bash[41290]: ts=2026-03-09T14:41:44.146Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.3\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.3\", ceph_version=\"ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.3\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"192.168.123.111:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:41:44.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:44 vm07 bash[56315]: cluster 2026-03-09T14:41:42.559520+0000 mgr.y (mgr.44103) 216 : cluster [DBG] pgmap v129: 161 pgs: 161 active+clean; 457 KiB data, 266 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T14:41:44.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:44 vm07 bash[56315]: cluster 2026-03-09T14:41:42.559520+0000 mgr.y (mgr.44103) 216 : cluster [DBG] pgmap v129: 161 pgs: 161 active+clean; 457 KiB data, 266 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T14:41:44.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:44 vm07 bash[56315]: audit 2026-03-09T14:41:43.573972+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:44.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:44 vm07 bash[56315]: audit 2026-03-09T14:41:43.573972+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:44.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:44 vm07 bash[56315]: audit 2026-03-09T14:41:43.581355+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:44.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:44 vm07 bash[56315]: audit 2026-03-09T14:41:43.581355+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:44.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:44 vm07 bash[56315]: audit 2026-03-09T14:41:44.237328+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:44.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:44 vm07 bash[56315]: audit 2026-03-09T14:41:44.237328+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:44.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:44 vm07 bash[56315]: audit 2026-03-09T14:41:44.240470+0000 mon.a (mon.0) 562 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.urmgxb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:41:44.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 
09 14:41:44 vm07 bash[56315]: audit 2026-03-09T14:41:44.240470+0000 mon.a (mon.0) 562 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.urmgxb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:41:44.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:44 vm07 bash[56315]: audit 2026-03-09T14:41:44.241583+0000 mon.a (mon.0) 563 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:44.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:44 vm07 bash[56315]: audit 2026-03-09T14:41:44.241583+0000 mon.a (mon.0) 563 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:44.749 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:44 vm07 bash[55244]: cluster 2026-03-09T14:41:42.559520+0000 mgr.y (mgr.44103) 216 : cluster [DBG] pgmap v129: 161 pgs: 161 active+clean; 457 KiB data, 266 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T14:41:44.749 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:44 vm07 bash[55244]: cluster 2026-03-09T14:41:42.559520+0000 mgr.y (mgr.44103) 216 : cluster [DBG] pgmap v129: 161 pgs: 161 active+clean; 457 KiB data, 266 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T14:41:44.749 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:44 vm07 bash[55244]: audit 2026-03-09T14:41:43.573972+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:44.749 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:44 vm07 bash[55244]: audit 2026-03-09T14:41:43.573972+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:44.749 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:44 vm07 bash[55244]: audit 2026-03-09T14:41:43.581355+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:44.749 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:44 vm07 bash[55244]: audit 2026-03-09T14:41:43.581355+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:44.749 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:44 vm07 bash[55244]: audit 2026-03-09T14:41:44.237328+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:44.749 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:44 vm07 bash[55244]: audit 2026-03-09T14:41:44.237328+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:44.749 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:44 vm07 bash[55244]: audit 2026-03-09T14:41:44.240470+0000 mon.a (mon.0) 562 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.urmgxb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:41:44.749 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:44 vm07 bash[55244]: audit 2026-03-09T14:41:44.240470+0000 mon.a (mon.0) 562 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.urmgxb", "caps": ["mon", 
"allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:41:44.749 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:44 vm07 bash[55244]: audit 2026-03-09T14:41:44.241583+0000 mon.a (mon.0) 563 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:44.749 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:44 vm07 bash[55244]: audit 2026-03-09T14:41:44.241583+0000 mon.a (mon.0) 563 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:44.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:44 vm11 bash[43577]: cluster 2026-03-09T14:41:42.559520+0000 mgr.y (mgr.44103) 216 : cluster [DBG] pgmap v129: 161 pgs: 161 active+clean; 457 KiB data, 266 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T14:41:44.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:44 vm11 bash[43577]: cluster 2026-03-09T14:41:42.559520+0000 mgr.y (mgr.44103) 216 : cluster [DBG] pgmap v129: 161 pgs: 161 active+clean; 457 KiB data, 266 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-09T14:41:44.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:44 vm11 bash[43577]: audit 2026-03-09T14:41:43.573972+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:44.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:44 vm11 bash[43577]: audit 2026-03-09T14:41:43.573972+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:44.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:44 vm11 bash[43577]: audit 2026-03-09T14:41:43.581355+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:44.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:44 vm11 bash[43577]: audit 2026-03-09T14:41:43.581355+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:44.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:44 vm11 bash[43577]: audit 2026-03-09T14:41:44.237328+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:44.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:44 vm11 bash[43577]: audit 2026-03-09T14:41:44.237328+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:44.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:44 vm11 bash[43577]: audit 2026-03-09T14:41:44.240470+0000 mon.a (mon.0) 562 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.urmgxb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:41:44.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:44 vm11 bash[43577]: audit 2026-03-09T14:41:44.240470+0000 mon.a (mon.0) 562 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.urmgxb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:41:44.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:44 vm11 bash[43577]: audit 2026-03-09T14:41:44.241583+0000 mon.a (mon.0) 563 : audit [DBG] 
from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:44.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:44 vm11 bash[43577]: audit 2026-03-09T14:41:44.241583+0000 mon.a (mon.0) 563 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:45.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:44 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:45.154 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:41:44 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:45.154 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:41:44 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:45.154 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:41:44 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:45.154 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:41:44 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:45.154 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:41:44 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:45.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:44 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:45.154 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:41:44 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:45.154 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:41:44 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:45.459 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:45 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:45.459 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:45 vm07 bash[56315]: cephadm 2026-03-09T14:41:44.232601+0000 mgr.y (mgr.44103) 217 : cephadm [INF] Upgrade: Updating rgw.foo.vm07.urmgxb (2/4) 2026-03-09T14:41:45.459 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:45 vm07 bash[56315]: cephadm 2026-03-09T14:41:44.232601+0000 mgr.y (mgr.44103) 217 : cephadm [INF] Upgrade: Updating rgw.foo.vm07.urmgxb (2/4) 2026-03-09T14:41:45.459 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:45 vm07 bash[56315]: cephadm 2026-03-09T14:41:44.242521+0000 mgr.y (mgr.44103) 218 : cephadm [INF] Deploying daemon rgw.foo.vm07.urmgxb on vm07 2026-03-09T14:41:45.459 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:45 vm07 bash[56315]: cephadm 2026-03-09T14:41:44.242521+0000 mgr.y (mgr.44103) 218 : cephadm [INF] Deploying daemon rgw.foo.vm07.urmgxb on vm07 2026-03-09T14:41:45.459 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:41:45 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:45.459 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:41:45 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:45.459 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:41:45 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:45.459 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:45 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:45.459 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:45 vm07 bash[55244]: cephadm 2026-03-09T14:41:44.232601+0000 mgr.y (mgr.44103) 217 : cephadm [INF] Upgrade: Updating rgw.foo.vm07.urmgxb (2/4) 2026-03-09T14:41:45.459 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:45 vm07 bash[55244]: cephadm 2026-03-09T14:41:44.232601+0000 mgr.y (mgr.44103) 217 : cephadm [INF] Upgrade: Updating rgw.foo.vm07.urmgxb (2/4) 2026-03-09T14:41:45.459 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:45 vm07 bash[55244]: cephadm 2026-03-09T14:41:44.242521+0000 mgr.y (mgr.44103) 218 : cephadm [INF] Deploying daemon rgw.foo.vm07.urmgxb on vm07 2026-03-09T14:41:45.460 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:45 vm07 bash[55244]: cephadm 2026-03-09T14:41:44.242521+0000 mgr.y (mgr.44103) 218 : cephadm [INF] Deploying daemon rgw.foo.vm07.urmgxb on vm07 2026-03-09T14:41:45.460 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:41:45 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:45.460 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:41:45 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:45.460 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:41:45 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:45.460 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:41:45 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
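The KillMode=none warning that repeats above is emitted once per unit instantiated from the ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service template, which was written by the v17.2.0 cephadm this job starts from. Purely as an illustrative sketch (this run does not do this, and cephadm may rewrite the unit file when it redeploys a daemon), a systemd drop-in on the affected host is one way to move such a unit to the KillMode the warning suggests:

    # hypothetical remediation sketch, run on the affected host (vm07 / vm11); not part of this job
    install -d /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service.d
    printf '[Service]\nKillMode=mixed\n' \
        > /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service.d/10-killmode.conf
    systemctl daemon-reload   # drop-in takes effect the next time each instance is (re)started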
2026-03-09T14:41:45.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:45 vm11 bash[43577]: cephadm 2026-03-09T14:41:44.232601+0000 mgr.y (mgr.44103) 217 : cephadm [INF] Upgrade: Updating rgw.foo.vm07.urmgxb (2/4) 2026-03-09T14:41:45.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:45 vm11 bash[43577]: cephadm 2026-03-09T14:41:44.232601+0000 mgr.y (mgr.44103) 217 : cephadm [INF] Upgrade: Updating rgw.foo.vm07.urmgxb (2/4) 2026-03-09T14:41:45.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:45 vm11 bash[43577]: cephadm 2026-03-09T14:41:44.242521+0000 mgr.y (mgr.44103) 218 : cephadm [INF] Deploying daemon rgw.foo.vm07.urmgxb on vm07 2026-03-09T14:41:45.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:45 vm11 bash[43577]: cephadm 2026-03-09T14:41:44.242521+0000 mgr.y (mgr.44103) 218 : cephadm [INF] Deploying daemon rgw.foo.vm07.urmgxb on vm07 2026-03-09T14:41:46.673 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:41:46 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:46.673 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:41:46 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:41:46.673 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:46 vm11 bash[43577]: cluster 2026-03-09T14:41:44.559940+0000 mgr.y (mgr.44103) 219 : cluster [DBG] pgmap v130: 161 pgs: 161 active+clean; 457 KiB data, 270 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:41:46.673 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:46 vm11 bash[43577]: cluster 2026-03-09T14:41:44.559940+0000 mgr.y (mgr.44103) 219 : cluster [DBG] pgmap v130: 161 pgs: 161 active+clean; 457 KiB data, 270 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:41:46.673 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:46 vm11 bash[43577]: audit 2026-03-09T14:41:45.495937+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:46.673 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:46 vm11 bash[43577]: audit 2026-03-09T14:41:45.495937+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:46.673 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:46 vm11 bash[43577]: audit 2026-03-09T14:41:45.503519+0000 mon.a (mon.0) 565 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:46.673 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:46 vm11 bash[43577]: audit 2026-03-09T14:41:45.503519+0000 mon.a (mon.0) 565 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:46.673 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:46 vm11 bash[43577]: audit 2026-03-09T14:41:46.124526+0000 mon.a (mon.0) 566 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:46.673 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:46 vm11 bash[43577]: audit 2026-03-09T14:41:46.124526+0000 mon.a (mon.0) 566 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:46.673 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:46 vm11 bash[43577]: audit 2026-03-09T14:41:46.126682+0000 mon.a (mon.0) 567 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm11.ncyump", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:41:46.673 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:46 vm11 bash[43577]: audit 2026-03-09T14:41:46.126682+0000 mon.a (mon.0) 567 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm11.ncyump", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:41:46.673 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:46 vm11 bash[43577]: audit 2026-03-09T14:41:46.127810+0000 mon.a (mon.0) 568 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:46.673 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:46 vm11 bash[43577]: audit 2026-03-09T14:41:46.127810+0000 mon.a (mon.0) 568 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:46.673 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:46 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:46.673 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:41:46 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:46.673 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:41:46 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:46.673 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:41:46 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:46.673 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:41:46 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:46.673 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:41:46 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:46.673 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:41:46 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:41:46.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:46 vm07 bash[55244]: cluster 2026-03-09T14:41:44.559940+0000 mgr.y (mgr.44103) 219 : cluster [DBG] pgmap v130: 161 pgs: 161 active+clean; 457 KiB data, 270 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:41:46.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:46 vm07 bash[55244]: cluster 2026-03-09T14:41:44.559940+0000 mgr.y (mgr.44103) 219 : cluster [DBG] pgmap v130: 161 pgs: 161 active+clean; 457 KiB data, 270 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:41:46.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:46 vm07 bash[55244]: audit 2026-03-09T14:41:45.495937+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:46.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:46 vm07 bash[55244]: audit 2026-03-09T14:41:45.495937+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:46.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:46 vm07 bash[55244]: audit 2026-03-09T14:41:45.503519+0000 mon.a (mon.0) 565 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:46.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:46 vm07 bash[55244]: audit 2026-03-09T14:41:45.503519+0000 mon.a (mon.0) 565 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:46.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:46 vm07 bash[55244]: audit 2026-03-09T14:41:46.124526+0000 mon.a (mon.0) 566 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:46.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:46 vm07 bash[55244]: audit 2026-03-09T14:41:46.124526+0000 mon.a (mon.0) 566 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:46.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:46 vm07 bash[55244]: audit 2026-03-09T14:41:46.126682+0000 mon.a (mon.0) 567 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm11.ncyump", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:41:46.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:46 vm07 bash[55244]: audit 2026-03-09T14:41:46.126682+0000 mon.a (mon.0) 567 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm11.ncyump", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:41:46.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:46 vm07 bash[55244]: audit 2026-03-09T14:41:46.127810+0000 mon.a (mon.0) 568 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:46.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:46 vm07 bash[55244]: audit 2026-03-09T14:41:46.127810+0000 mon.a (mon.0) 568 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:46.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:46 vm07 bash[56315]: cluster 2026-03-09T14:41:44.559940+0000 mgr.y (mgr.44103) 219 : cluster [DBG] pgmap v130: 161 pgs: 161 active+clean; 457 KiB data, 270 MiB used, 160 GiB / 
160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:41:46.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:46 vm07 bash[56315]: cluster 2026-03-09T14:41:44.559940+0000 mgr.y (mgr.44103) 219 : cluster [DBG] pgmap v130: 161 pgs: 161 active+clean; 457 KiB data, 270 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:41:46.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:46 vm07 bash[56315]: audit 2026-03-09T14:41:45.495937+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:46.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:46 vm07 bash[56315]: audit 2026-03-09T14:41:45.495937+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:46.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:46 vm07 bash[56315]: audit 2026-03-09T14:41:45.503519+0000 mon.a (mon.0) 565 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:46.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:46 vm07 bash[56315]: audit 2026-03-09T14:41:45.503519+0000 mon.a (mon.0) 565 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:46.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:46 vm07 bash[56315]: audit 2026-03-09T14:41:46.124526+0000 mon.a (mon.0) 566 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:46.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:46 vm07 bash[56315]: audit 2026-03-09T14:41:46.124526+0000 mon.a (mon.0) 566 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:46.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:46 vm07 bash[56315]: audit 2026-03-09T14:41:46.126682+0000 mon.a (mon.0) 567 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm11.ncyump", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:41:46.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:46 vm07 bash[56315]: audit 2026-03-09T14:41:46.126682+0000 mon.a (mon.0) 567 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm11.ncyump", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:41:46.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:46 vm07 bash[56315]: audit 2026-03-09T14:41:46.127810+0000 mon.a (mon.0) 568 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:46.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:46 vm07 bash[56315]: audit 2026-03-09T14:41:46.127810+0000 mon.a (mon.0) 568 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:47.219 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:41:47 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:41:47.219 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:41:47 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:47.219 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:47 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:47.219 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:41:47 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:47.219 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:41:47 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:47.219 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:41:47 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:47.219 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:41:47 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:41:47.219 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:41:46 vm11 bash[41290]: ts=2026-03-09T14:41:46.947Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:41:47.220 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:41:47 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:47.220 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:41:47 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:41:47.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:47 vm11 bash[43577]: cephadm 2026-03-09T14:41:46.120201+0000 mgr.y (mgr.44103) 220 : cephadm [INF] Upgrade: Updating rgw.foo.vm11.ncyump (3/4) 2026-03-09T14:41:47.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:47 vm11 bash[43577]: cephadm 2026-03-09T14:41:46.120201+0000 mgr.y (mgr.44103) 220 : cephadm [INF] Upgrade: Updating rgw.foo.vm11.ncyump (3/4) 2026-03-09T14:41:47.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:47 vm11 bash[43577]: cephadm 2026-03-09T14:41:46.128516+0000 mgr.y (mgr.44103) 221 : cephadm [INF] Deploying daemon rgw.foo.vm11.ncyump on vm11 2026-03-09T14:41:47.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:47 vm11 bash[43577]: cephadm 2026-03-09T14:41:46.128516+0000 mgr.y (mgr.44103) 221 : cephadm [INF] Deploying daemon rgw.foo.vm11.ncyump on vm11 2026-03-09T14:41:47.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:47 vm11 bash[43577]: audit 2026-03-09T14:41:47.138667+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:47.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:47 vm11 bash[43577]: audit 2026-03-09T14:41:47.138667+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:47.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:47 vm11 bash[43577]: audit 2026-03-09T14:41:47.144241+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:47.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:47 vm11 bash[43577]: audit 2026-03-09T14:41:47.144241+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:47.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:47 vm07 bash[55244]: cephadm 2026-03-09T14:41:46.120201+0000 mgr.y (mgr.44103) 220 : cephadm [INF] Upgrade: Updating rgw.foo.vm11.ncyump (3/4) 2026-03-09T14:41:47.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:47 vm07 bash[55244]: cephadm 2026-03-09T14:41:46.120201+0000 mgr.y (mgr.44103) 220 : cephadm [INF] Upgrade: Updating rgw.foo.vm11.ncyump (3/4) 2026-03-09T14:41:47.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:47 vm07 bash[55244]: cephadm 2026-03-09T14:41:46.128516+0000 mgr.y (mgr.44103) 221 : cephadm [INF] Deploying daemon rgw.foo.vm11.ncyump on vm11 2026-03-09T14:41:47.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:47 vm07 bash[55244]: cephadm 2026-03-09T14:41:46.128516+0000 mgr.y (mgr.44103) 221 : cephadm [INF] Deploying daemon rgw.foo.vm11.ncyump on vm11 2026-03-09T14:41:47.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:47 vm07 bash[55244]: audit 2026-03-09T14:41:47.138667+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:47.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:47 vm07 bash[55244]: audit 2026-03-09T14:41:47.138667+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:47.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:47 vm07 bash[55244]: audit 2026-03-09T14:41:47.144241+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:47.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:47 vm07 bash[55244]: audit 2026-03-09T14:41:47.144241+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.44103 
192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:47.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:47 vm07 bash[56315]: cephadm 2026-03-09T14:41:46.120201+0000 mgr.y (mgr.44103) 220 : cephadm [INF] Upgrade: Updating rgw.foo.vm11.ncyump (3/4) 2026-03-09T14:41:47.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:47 vm07 bash[56315]: cephadm 2026-03-09T14:41:46.120201+0000 mgr.y (mgr.44103) 220 : cephadm [INF] Upgrade: Updating rgw.foo.vm11.ncyump (3/4) 2026-03-09T14:41:47.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:47 vm07 bash[56315]: cephadm 2026-03-09T14:41:46.128516+0000 mgr.y (mgr.44103) 221 : cephadm [INF] Deploying daemon rgw.foo.vm11.ncyump on vm11 2026-03-09T14:41:47.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:47 vm07 bash[56315]: cephadm 2026-03-09T14:41:46.128516+0000 mgr.y (mgr.44103) 221 : cephadm [INF] Deploying daemon rgw.foo.vm11.ncyump on vm11 2026-03-09T14:41:47.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:47 vm07 bash[56315]: audit 2026-03-09T14:41:47.138667+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:47.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:47 vm07 bash[56315]: audit 2026-03-09T14:41:47.138667+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:47.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:47 vm07 bash[56315]: audit 2026-03-09T14:41:47.144241+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:47.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:47 vm07 bash[56315]: audit 2026-03-09T14:41:47.144241+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:48.252 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:41:48 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:48.252 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:41:48 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:48.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:48 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:48.252 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:41:48 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:48.253 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:41:48 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:48.253 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:41:48 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:48.253 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:41:48 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:48.253 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:41:48 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:48.253 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:41:48 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:41:48.601 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:48 vm11 bash[43577]: cluster 2026-03-09T14:41:46.560517+0000 mgr.y (mgr.44103) 222 : cluster [DBG] pgmap v131: 161 pgs: 161 active+clean; 457 KiB data, 274 MiB used, 160 GiB / 160 GiB avail; 45 KiB/s rd, 0 B/s wr, 70 op/s 2026-03-09T14:41:48.601 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:48 vm11 bash[43577]: cluster 2026-03-09T14:41:46.560517+0000 mgr.y (mgr.44103) 222 : cluster [DBG] pgmap v131: 161 pgs: 161 active+clean; 457 KiB data, 274 MiB used, 160 GiB / 160 GiB avail; 45 KiB/s rd, 0 B/s wr, 70 op/s 2026-03-09T14:41:48.601 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:48 vm11 bash[43577]: audit 2026-03-09T14:41:47.698837+0000 mon.a (mon.0) 571 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:48.601 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:48 vm11 bash[43577]: audit 2026-03-09T14:41:47.698837+0000 mon.a (mon.0) 571 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:48.601 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:48 vm11 bash[43577]: audit 2026-03-09T14:41:47.701187+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm11.ocxkef", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:41:48.601 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:48 vm11 bash[43577]: audit 2026-03-09T14:41:47.701187+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm11.ocxkef", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:41:48.601 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:48 vm11 bash[43577]: audit 2026-03-09T14:41:47.702031+0000 mon.a (mon.0) 573 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:48.601 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:48 vm11 bash[43577]: audit 2026-03-09T14:41:47.702031+0000 mon.a (mon.0) 573 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:48.875 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:41:48 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:48.875 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:41:48 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:48.875 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:41:48 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:48.875 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:41:48 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:48.875 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:41:48 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:48.876 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:41:48 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:48.876 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:48 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:48.876 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:41:48 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:41:48.876 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:41:48 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:41:48.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:48 vm07 bash[55244]: cluster 2026-03-09T14:41:46.560517+0000 mgr.y (mgr.44103) 222 : cluster [DBG] pgmap v131: 161 pgs: 161 active+clean; 457 KiB data, 274 MiB used, 160 GiB / 160 GiB avail; 45 KiB/s rd, 0 B/s wr, 70 op/s 2026-03-09T14:41:48.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:48 vm07 bash[55244]: cluster 2026-03-09T14:41:46.560517+0000 mgr.y (mgr.44103) 222 : cluster [DBG] pgmap v131: 161 pgs: 161 active+clean; 457 KiB data, 274 MiB used, 160 GiB / 160 GiB avail; 45 KiB/s rd, 0 B/s wr, 70 op/s 2026-03-09T14:41:48.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:48 vm07 bash[55244]: audit 2026-03-09T14:41:47.698837+0000 mon.a (mon.0) 571 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:48.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:48 vm07 bash[55244]: audit 2026-03-09T14:41:47.698837+0000 mon.a (mon.0) 571 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:48.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:48 vm07 bash[55244]: audit 2026-03-09T14:41:47.701187+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm11.ocxkef", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:41:48.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:48 vm07 bash[55244]: audit 2026-03-09T14:41:47.701187+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm11.ocxkef", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:41:48.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:48 vm07 bash[55244]: audit 2026-03-09T14:41:47.702031+0000 mon.a (mon.0) 573 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:48.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:48 vm07 bash[55244]: audit 2026-03-09T14:41:47.702031+0000 mon.a (mon.0) 573 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:48.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:48 vm07 bash[56315]: cluster 2026-03-09T14:41:46.560517+0000 mgr.y (mgr.44103) 222 : cluster [DBG] pgmap v131: 161 pgs: 161 active+clean; 457 KiB data, 274 MiB used, 160 GiB / 160 GiB avail; 45 KiB/s rd, 0 B/s wr, 70 op/s 2026-03-09T14:41:48.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:48 vm07 bash[56315]: cluster 2026-03-09T14:41:46.560517+0000 mgr.y (mgr.44103) 222 : cluster [DBG] pgmap v131: 161 pgs: 161 active+clean; 457 KiB data, 274 MiB used, 160 GiB / 160 GiB avail; 45 KiB/s rd, 0 B/s wr, 70 op/s 2026-03-09T14:41:48.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:48 vm07 bash[56315]: audit 2026-03-09T14:41:47.698837+0000 mon.a (mon.0) 571 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:48.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:48 vm07 bash[56315]: audit 2026-03-09T14:41:47.698837+0000 mon.a (mon.0) 571 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:48.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:48 vm07 
bash[56315]: audit 2026-03-09T14:41:47.701187+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm11.ocxkef", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:41:48.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:48 vm07 bash[56315]: audit 2026-03-09T14:41:47.701187+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.smpl.vm11.ocxkef", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:41:48.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:48 vm07 bash[56315]: audit 2026-03-09T14:41:47.702031+0000 mon.a (mon.0) 573 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:48.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:48 vm07 bash[56315]: audit 2026-03-09T14:41:47.702031+0000 mon.a (mon.0) 573 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:41:49.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:49 vm11 bash[43577]: audit 2026-03-09T14:41:47.579006+0000 mgr.y (mgr.44103) 223 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:41:49.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:49 vm11 bash[43577]: audit 2026-03-09T14:41:47.579006+0000 mgr.y (mgr.44103) 223 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:41:49.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:49 vm11 bash[43577]: cephadm 2026-03-09T14:41:47.694131+0000 mgr.y (mgr.44103) 224 : cephadm [INF] Upgrade: Updating rgw.smpl.vm11.ocxkef (4/4) 2026-03-09T14:41:49.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:49 vm11 bash[43577]: cephadm 2026-03-09T14:41:47.694131+0000 mgr.y (mgr.44103) 224 : cephadm [INF] Upgrade: Updating rgw.smpl.vm11.ocxkef (4/4) 2026-03-09T14:41:49.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:49 vm11 bash[43577]: cephadm 2026-03-09T14:41:47.702502+0000 mgr.y (mgr.44103) 225 : cephadm [INF] Deploying daemon rgw.smpl.vm11.ocxkef on vm11 2026-03-09T14:41:49.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:49 vm11 bash[43577]: cephadm 2026-03-09T14:41:47.702502+0000 mgr.y (mgr.44103) 225 : cephadm [INF] Deploying daemon rgw.smpl.vm11.ocxkef on vm11 2026-03-09T14:41:49.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:49 vm11 bash[43577]: audit 2026-03-09T14:41:48.882719+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:49.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:49 vm11 bash[43577]: audit 2026-03-09T14:41:48.882719+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:49.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:49 vm11 bash[43577]: audit 2026-03-09T14:41:48.893738+0000 mon.a (mon.0) 575 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:49.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:49 vm11 bash[43577]: audit 
2026-03-09T14:41:48.893738+0000 mon.a (mon.0) 575 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:49.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:49 vm07 bash[55244]: audit 2026-03-09T14:41:47.579006+0000 mgr.y (mgr.44103) 223 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:41:49.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:49 vm07 bash[55244]: audit 2026-03-09T14:41:47.579006+0000 mgr.y (mgr.44103) 223 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:41:49.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:49 vm07 bash[55244]: cephadm 2026-03-09T14:41:47.694131+0000 mgr.y (mgr.44103) 224 : cephadm [INF] Upgrade: Updating rgw.smpl.vm11.ocxkef (4/4) 2026-03-09T14:41:49.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:49 vm07 bash[55244]: cephadm 2026-03-09T14:41:47.694131+0000 mgr.y (mgr.44103) 224 : cephadm [INF] Upgrade: Updating rgw.smpl.vm11.ocxkef (4/4) 2026-03-09T14:41:49.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:49 vm07 bash[55244]: cephadm 2026-03-09T14:41:47.702502+0000 mgr.y (mgr.44103) 225 : cephadm [INF] Deploying daemon rgw.smpl.vm11.ocxkef on vm11 2026-03-09T14:41:49.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:49 vm07 bash[55244]: cephadm 2026-03-09T14:41:47.702502+0000 mgr.y (mgr.44103) 225 : cephadm [INF] Deploying daemon rgw.smpl.vm11.ocxkef on vm11 2026-03-09T14:41:49.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:49 vm07 bash[55244]: audit 2026-03-09T14:41:48.882719+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:49.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:49 vm07 bash[55244]: audit 2026-03-09T14:41:48.882719+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:49.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:49 vm07 bash[55244]: audit 2026-03-09T14:41:48.893738+0000 mon.a (mon.0) 575 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:49.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:49 vm07 bash[55244]: audit 2026-03-09T14:41:48.893738+0000 mon.a (mon.0) 575 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:49.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:49 vm07 bash[56315]: audit 2026-03-09T14:41:47.579006+0000 mgr.y (mgr.44103) 223 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:41:49.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:49 vm07 bash[56315]: audit 2026-03-09T14:41:47.579006+0000 mgr.y (mgr.44103) 223 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:41:49.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:49 vm07 bash[56315]: cephadm 2026-03-09T14:41:47.694131+0000 mgr.y (mgr.44103) 224 : cephadm [INF] Upgrade: Updating rgw.smpl.vm11.ocxkef (4/4) 2026-03-09T14:41:49.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:49 vm07 bash[56315]: cephadm 2026-03-09T14:41:47.694131+0000 mgr.y (mgr.44103) 224 : cephadm [INF] Upgrade: Updating rgw.smpl.vm11.ocxkef (4/4) 2026-03-09T14:41:49.904 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:49 vm07 bash[56315]: cephadm 2026-03-09T14:41:47.702502+0000 mgr.y (mgr.44103) 225 : cephadm [INF] Deploying daemon rgw.smpl.vm11.ocxkef on vm11 2026-03-09T14:41:49.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:49 vm07 bash[56315]: cephadm 2026-03-09T14:41:47.702502+0000 mgr.y (mgr.44103) 225 : cephadm [INF] Deploying daemon rgw.smpl.vm11.ocxkef on vm11 2026-03-09T14:41:49.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:49 vm07 bash[56315]: audit 2026-03-09T14:41:48.882719+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:49.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:49 vm07 bash[56315]: audit 2026-03-09T14:41:48.882719+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:49.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:49 vm07 bash[56315]: audit 2026-03-09T14:41:48.893738+0000 mon.a (mon.0) 575 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:49.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:49 vm07 bash[56315]: audit 2026-03-09T14:41:48.893738+0000 mon.a (mon.0) 575 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:50.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:50 vm11 bash[43577]: cluster 2026-03-09T14:41:48.560865+0000 mgr.y (mgr.44103) 226 : cluster [DBG] pgmap v132: 161 pgs: 161 active+clean; 457 KiB data, 278 MiB used, 160 GiB / 160 GiB avail; 78 KiB/s rd, 0 B/s wr, 120 op/s 2026-03-09T14:41:50.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:50 vm11 bash[43577]: cluster 2026-03-09T14:41:48.560865+0000 mgr.y (mgr.44103) 226 : cluster [DBG] pgmap v132: 161 pgs: 161 active+clean; 457 KiB data, 278 MiB used, 160 GiB / 160 GiB avail; 78 KiB/s rd, 0 B/s wr, 120 op/s 2026-03-09T14:41:50.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:50 vm07 bash[55244]: cluster 2026-03-09T14:41:48.560865+0000 mgr.y (mgr.44103) 226 : cluster [DBG] pgmap v132: 161 pgs: 161 active+clean; 457 KiB data, 278 MiB used, 160 GiB / 160 GiB avail; 78 KiB/s rd, 0 B/s wr, 120 op/s 2026-03-09T14:41:50.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:50 vm07 bash[55244]: cluster 2026-03-09T14:41:48.560865+0000 mgr.y (mgr.44103) 226 : cluster [DBG] pgmap v132: 161 pgs: 161 active+clean; 457 KiB data, 278 MiB used, 160 GiB / 160 GiB avail; 78 KiB/s rd, 0 B/s wr, 120 op/s 2026-03-09T14:41:50.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:50 vm07 bash[56315]: cluster 2026-03-09T14:41:48.560865+0000 mgr.y (mgr.44103) 226 : cluster [DBG] pgmap v132: 161 pgs: 161 active+clean; 457 KiB data, 278 MiB used, 160 GiB / 160 GiB avail; 78 KiB/s rd, 0 B/s wr, 120 op/s 2026-03-09T14:41:50.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:50 vm07 bash[56315]: cluster 2026-03-09T14:41:48.560865+0000 mgr.y (mgr.44103) 226 : cluster [DBG] pgmap v132: 161 pgs: 161 active+clean; 457 KiB data, 278 MiB used, 160 GiB / 160 GiB avail; 78 KiB/s rd, 0 B/s wr, 120 op/s 2026-03-09T14:41:52.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:52 vm11 bash[43577]: cluster 2026-03-09T14:41:50.561241+0000 mgr.y (mgr.44103) 227 : cluster [DBG] pgmap v133: 161 pgs: 161 active+clean; 457 KiB data, 278 MiB used, 160 GiB / 160 GiB avail; 139 KiB/s rd, 307 B/s wr, 214 op/s 2026-03-09T14:41:52.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:52 vm11 bash[43577]: cluster 
2026-03-09T14:41:50.561241+0000 mgr.y (mgr.44103) 227 : cluster [DBG] pgmap v133: 161 pgs: 161 active+clean; 457 KiB data, 278 MiB used, 160 GiB / 160 GiB avail; 139 KiB/s rd, 307 B/s wr, 214 op/s 2026-03-09T14:41:52.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:52 vm07 bash[55244]: cluster 2026-03-09T14:41:50.561241+0000 mgr.y (mgr.44103) 227 : cluster [DBG] pgmap v133: 161 pgs: 161 active+clean; 457 KiB data, 278 MiB used, 160 GiB / 160 GiB avail; 139 KiB/s rd, 307 B/s wr, 214 op/s 2026-03-09T14:41:52.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:52 vm07 bash[55244]: cluster 2026-03-09T14:41:50.561241+0000 mgr.y (mgr.44103) 227 : cluster [DBG] pgmap v133: 161 pgs: 161 active+clean; 457 KiB data, 278 MiB used, 160 GiB / 160 GiB avail; 139 KiB/s rd, 307 B/s wr, 214 op/s 2026-03-09T14:41:52.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:52 vm07 bash[56315]: cluster 2026-03-09T14:41:50.561241+0000 mgr.y (mgr.44103) 227 : cluster [DBG] pgmap v133: 161 pgs: 161 active+clean; 457 KiB data, 278 MiB used, 160 GiB / 160 GiB avail; 139 KiB/s rd, 307 B/s wr, 214 op/s 2026-03-09T14:41:52.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:52 vm07 bash[56315]: cluster 2026-03-09T14:41:50.561241+0000 mgr.y (mgr.44103) 227 : cluster [DBG] pgmap v133: 161 pgs: 161 active+clean; 457 KiB data, 278 MiB used, 160 GiB / 160 GiB avail; 139 KiB/s rd, 307 B/s wr, 214 op/s 2026-03-09T14:41:53.904 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:53 vm11 bash[43577]: cluster 2026-03-09T14:41:52.561661+0000 mgr.y (mgr.44103) 228 : cluster [DBG] pgmap v134: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 216 KiB/s rd, 289 B/s wr, 333 op/s 2026-03-09T14:41:53.904 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:53 vm11 bash[43577]: cluster 2026-03-09T14:41:52.561661+0000 mgr.y (mgr.44103) 228 : cluster [DBG] pgmap v134: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 216 KiB/s rd, 289 B/s wr, 333 op/s 2026-03-09T14:41:53.904 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:53 vm11 bash[43577]: audit 2026-03-09T14:41:52.579404+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:53.904 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:53 vm11 bash[43577]: audit 2026-03-09T14:41:52.579404+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:53.904 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:53 vm11 bash[43577]: audit 2026-03-09T14:41:52.580488+0000 mon.a (mon.0) 577 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:41:53.904 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:53 vm11 bash[43577]: audit 2026-03-09T14:41:52.580488+0000 mon.a (mon.0) 577 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:41:53.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:53 vm07 bash[56315]: cluster 2026-03-09T14:41:52.561661+0000 mgr.y (mgr.44103) 228 : cluster [DBG] pgmap v134: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 216 KiB/s rd, 289 B/s wr, 333 op/s 2026-03-09T14:41:53.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:53 vm07 bash[56315]: cluster 2026-03-09T14:41:52.561661+0000 mgr.y (mgr.44103) 228 : cluster [DBG] pgmap v134: 161 pgs: 161 
active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 216 KiB/s rd, 289 B/s wr, 333 op/s 2026-03-09T14:41:53.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:53 vm07 bash[56315]: audit 2026-03-09T14:41:52.579404+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:53.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:53 vm07 bash[56315]: audit 2026-03-09T14:41:52.579404+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:53.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:53 vm07 bash[56315]: audit 2026-03-09T14:41:52.580488+0000 mon.a (mon.0) 577 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:41:53.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:53 vm07 bash[56315]: audit 2026-03-09T14:41:52.580488+0000 mon.a (mon.0) 577 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:41:53.904 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:41:53 vm07 bash[52213]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:41:53] "GET /metrics HTTP/1.1" 200 38190 "" "Prometheus/2.51.0" 2026-03-09T14:41:53.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:53 vm07 bash[55244]: cluster 2026-03-09T14:41:52.561661+0000 mgr.y (mgr.44103) 228 : cluster [DBG] pgmap v134: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 216 KiB/s rd, 289 B/s wr, 333 op/s 2026-03-09T14:41:53.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:53 vm07 bash[55244]: cluster 2026-03-09T14:41:52.561661+0000 mgr.y (mgr.44103) 228 : cluster [DBG] pgmap v134: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 216 KiB/s rd, 289 B/s wr, 333 op/s 2026-03-09T14:41:53.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:53 vm07 bash[55244]: audit 2026-03-09T14:41:52.579404+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:53.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:53 vm07 bash[55244]: audit 2026-03-09T14:41:52.579404+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:53.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:53 vm07 bash[55244]: audit 2026-03-09T14:41:52.580488+0000 mon.a (mon.0) 577 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:41:53.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:53 vm07 bash[55244]: audit 2026-03-09T14:41:52.580488+0000 mon.a (mon.0) 577 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:41:54.252 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:41:54 vm11 bash[41290]: ts=2026-03-09T14:41:54.146Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ 
$labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.3\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.3\", ceph_version=\"ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.3\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"192.168.123.111:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:41:55.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:55 vm07 bash[55244]: audit 2026-03-09T14:41:54.084406+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:55 vm07 bash[55244]: audit 2026-03-09T14:41:54.084406+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:55 vm07 bash[55244]: audit 2026-03-09T14:41:54.092182+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:55 vm07 bash[55244]: audit 2026-03-09T14:41:54.092182+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:55 vm07 bash[55244]: audit 2026-03-09T14:41:54.206539+0000 mon.a (mon.0) 580 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:55 vm07 bash[55244]: audit 2026-03-09T14:41:54.206539+0000 mon.a (mon.0) 580 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:55 vm07 bash[55244]: audit 2026-03-09T14:41:54.211397+0000 mon.a (mon.0) 581 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:55 vm07 bash[55244]: audit 2026-03-09T14:41:54.211397+0000 mon.a (mon.0) 581 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:55 vm07 bash[55244]: audit 2026-03-09T14:41:54.627066+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.404 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:55 vm07 bash[55244]: audit 2026-03-09T14:41:54.627066+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:55 vm07 bash[55244]: audit 2026-03-09T14:41:54.631663+0000 mon.a (mon.0) 583 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:55 vm07 bash[55244]: audit 2026-03-09T14:41:54.631663+0000 mon.a (mon.0) 583 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:55 vm07 bash[55244]: audit 2026-03-09T14:41:54.751923+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:55 vm07 bash[55244]: audit 2026-03-09T14:41:54.751923+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:55 vm07 bash[55244]: audit 2026-03-09T14:41:54.758260+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:55 vm07 bash[55244]: audit 2026-03-09T14:41:54.758260+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:55 vm07 bash[56315]: audit 2026-03-09T14:41:54.084406+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:55 vm07 bash[56315]: audit 2026-03-09T14:41:54.084406+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:55 vm07 bash[56315]: audit 2026-03-09T14:41:54.092182+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:55 vm07 bash[56315]: audit 2026-03-09T14:41:54.092182+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:55 vm07 bash[56315]: audit 2026-03-09T14:41:54.206539+0000 mon.a (mon.0) 580 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:55 vm07 bash[56315]: audit 2026-03-09T14:41:54.206539+0000 mon.a (mon.0) 580 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:55 vm07 bash[56315]: audit 2026-03-09T14:41:54.211397+0000 mon.a (mon.0) 581 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:55 vm07 bash[56315]: audit 2026-03-09T14:41:54.211397+0000 mon.a (mon.0) 581 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:55 vm07 bash[56315]: audit 2026-03-09T14:41:54.627066+0000 mon.a (mon.0) 582 : audit [INF] 
from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:55 vm07 bash[56315]: audit 2026-03-09T14:41:54.627066+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:55 vm07 bash[56315]: audit 2026-03-09T14:41:54.631663+0000 mon.a (mon.0) 583 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:55 vm07 bash[56315]: audit 2026-03-09T14:41:54.631663+0000 mon.a (mon.0) 583 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:55 vm07 bash[56315]: audit 2026-03-09T14:41:54.751923+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:55 vm07 bash[56315]: audit 2026-03-09T14:41:54.751923+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:55 vm07 bash[56315]: audit 2026-03-09T14:41:54.758260+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:55 vm07 bash[56315]: audit 2026-03-09T14:41:54.758260+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:55 vm11 bash[43577]: audit 2026-03-09T14:41:54.084406+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:55 vm11 bash[43577]: audit 2026-03-09T14:41:54.084406+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:55 vm11 bash[43577]: audit 2026-03-09T14:41:54.092182+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:55 vm11 bash[43577]: audit 2026-03-09T14:41:54.092182+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:55 vm11 bash[43577]: audit 2026-03-09T14:41:54.206539+0000 mon.a (mon.0) 580 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:55 vm11 bash[43577]: audit 2026-03-09T14:41:54.206539+0000 mon.a (mon.0) 580 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:55 vm11 bash[43577]: audit 2026-03-09T14:41:54.211397+0000 mon.a (mon.0) 581 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:55 vm11 bash[43577]: audit 2026-03-09T14:41:54.211397+0000 mon.a (mon.0) 581 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:55 vm11 
bash[43577]: audit 2026-03-09T14:41:54.627066+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:55 vm11 bash[43577]: audit 2026-03-09T14:41:54.627066+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:55 vm11 bash[43577]: audit 2026-03-09T14:41:54.631663+0000 mon.a (mon.0) 583 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:55 vm11 bash[43577]: audit 2026-03-09T14:41:54.631663+0000 mon.a (mon.0) 583 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:55 vm11 bash[43577]: audit 2026-03-09T14:41:54.751923+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:55 vm11 bash[43577]: audit 2026-03-09T14:41:54.751923+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:55 vm11 bash[43577]: audit 2026-03-09T14:41:54.758260+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:55.503 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:55 vm11 bash[43577]: audit 2026-03-09T14:41:54.758260+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:41:56.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:56 vm11 bash[43577]: cluster 2026-03-09T14:41:54.562109+0000 mgr.y (mgr.44103) 229 : cluster [DBG] pgmap v135: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 217 KiB/s rd, 341 B/s wr, 335 op/s 2026-03-09T14:41:56.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:56 vm11 bash[43577]: cluster 2026-03-09T14:41:54.562109+0000 mgr.y (mgr.44103) 229 : cluster [DBG] pgmap v135: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 217 KiB/s rd, 341 B/s wr, 335 op/s 2026-03-09T14:41:56.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:56 vm07 bash[55244]: cluster 2026-03-09T14:41:54.562109+0000 mgr.y (mgr.44103) 229 : cluster [DBG] pgmap v135: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 217 KiB/s rd, 341 B/s wr, 335 op/s 2026-03-09T14:41:56.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:56 vm07 bash[55244]: cluster 2026-03-09T14:41:54.562109+0000 mgr.y (mgr.44103) 229 : cluster [DBG] pgmap v135: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 217 KiB/s rd, 341 B/s wr, 335 op/s 2026-03-09T14:41:56.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:56 vm07 bash[56315]: cluster 2026-03-09T14:41:54.562109+0000 mgr.y (mgr.44103) 229 : cluster [DBG] pgmap v135: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 217 KiB/s rd, 341 B/s wr, 335 op/s 2026-03-09T14:41:56.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:56 vm07 bash[56315]: cluster 2026-03-09T14:41:54.562109+0000 mgr.y (mgr.44103) 229 : cluster [DBG] pgmap v135: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 217 KiB/s rd, 341 B/s wr, 335 op/s 
2026-03-09T14:41:57.252 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:41:56 vm11 bash[41290]: ts=2026-03-09T14:41:56.947Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:41:58.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:58 vm11 bash[43577]: cluster 2026-03-09T14:41:56.562567+0000 mgr.y (mgr.44103) 230 : cluster [DBG] pgmap v136: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 217 KiB/s rd, 341 B/s wr, 335 op/s 2026-03-09T14:41:58.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:58 vm11 bash[43577]: cluster 2026-03-09T14:41:56.562567+0000 mgr.y (mgr.44103) 230 : cluster [DBG] pgmap v136: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 217 KiB/s rd, 341 B/s wr, 335 op/s 2026-03-09T14:41:58.653 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:58 vm07 bash[55244]: cluster 2026-03-09T14:41:56.562567+0000 mgr.y (mgr.44103) 230 : cluster [DBG] pgmap v136: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 217 KiB/s rd, 341 B/s wr, 335 op/s 2026-03-09T14:41:58.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:58 vm07 bash[55244]: cluster 2026-03-09T14:41:56.562567+0000 mgr.y (mgr.44103) 230 : cluster [DBG] pgmap v136: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 217 KiB/s rd, 341 B/s wr, 335 op/s 2026-03-09T14:41:58.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:58 vm07 bash[56315]: cluster 2026-03-09T14:41:56.562567+0000 mgr.y (mgr.44103) 230 : cluster [DBG] pgmap v136: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 217 KiB/s rd, 341 B/s wr, 335 op/s 2026-03-09T14:41:58.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:58 vm07 bash[56315]: cluster 2026-03-09T14:41:56.562567+0000 mgr.y (mgr.44103) 230 : cluster [DBG] pgmap v136: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 217 KiB/s rd, 341 B/s wr, 335 op/s 2026-03-09T14:41:59.470 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:59 vm11 bash[43577]: audit 2026-03-09T14:41:57.587023+0000 mgr.y (mgr.44103) 231 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' 
cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:41:59.470 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:41:59 vm11 bash[43577]: audit 2026-03-09T14:41:57.587023+0000 mgr.y (mgr.44103) 231 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:41:59.532 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:59 vm07 bash[56315]: audit 2026-03-09T14:41:57.587023+0000 mgr.y (mgr.44103) 231 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:41:59.533 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:41:59 vm07 bash[56315]: audit 2026-03-09T14:41:57.587023+0000 mgr.y (mgr.44103) 231 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:41:59.533 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:59 vm07 bash[55244]: audit 2026-03-09T14:41:57.587023+0000 mgr.y (mgr.44103) 231 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:41:59.533 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:41:59 vm07 bash[55244]: audit 2026-03-09T14:41:57.587023+0000 mgr.y (mgr.44103) 231 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:42:00.403 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:00 vm11 bash[43577]: cluster 2026-03-09T14:41:58.562853+0000 mgr.y (mgr.44103) 232 : cluster [DBG] pgmap v137: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 180 KiB/s rd, 341 B/s wr, 278 op/s 2026-03-09T14:42:00.403 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:00 vm11 bash[43577]: cluster 2026-03-09T14:41:58.562853+0000 mgr.y (mgr.44103) 232 : cluster [DBG] pgmap v137: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 180 KiB/s rd, 341 B/s wr, 278 op/s 2026-03-09T14:42:00.403 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:00 vm11 bash[43577]: audit 2026-03-09T14:42:00.211451+0000 mon.a (mon.0) 586 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:00.403 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:00 vm11 bash[43577]: audit 2026-03-09T14:42:00.211451+0000 mon.a (mon.0) 586 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:00.403 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:00 vm11 bash[43577]: audit 2026-03-09T14:42:00.218447+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:00.403 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:00 vm11 bash[43577]: audit 2026-03-09T14:42:00.218447+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:00.531 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:00 vm07 bash[56315]: cluster 2026-03-09T14:41:58.562853+0000 mgr.y (mgr.44103) 232 : cluster [DBG] pgmap v137: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 180 KiB/s rd, 341 B/s wr, 278 op/s 2026-03-09T14:42:00.531 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:00 vm07 bash[56315]: cluster 2026-03-09T14:41:58.562853+0000 mgr.y (mgr.44103) 232 : cluster [DBG] pgmap v137: 161 pgs: 161 
active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 180 KiB/s rd, 341 B/s wr, 278 op/s 2026-03-09T14:42:00.531 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:00 vm07 bash[56315]: audit 2026-03-09T14:42:00.211451+0000 mon.a (mon.0) 586 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:00.531 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:00 vm07 bash[56315]: audit 2026-03-09T14:42:00.211451+0000 mon.a (mon.0) 586 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:00.531 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:00 vm07 bash[56315]: audit 2026-03-09T14:42:00.218447+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:00.531 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:00 vm07 bash[56315]: audit 2026-03-09T14:42:00.218447+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:00.531 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:00 vm07 bash[55244]: cluster 2026-03-09T14:41:58.562853+0000 mgr.y (mgr.44103) 232 : cluster [DBG] pgmap v137: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 180 KiB/s rd, 341 B/s wr, 278 op/s 2026-03-09T14:42:00.531 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:00 vm07 bash[55244]: cluster 2026-03-09T14:41:58.562853+0000 mgr.y (mgr.44103) 232 : cluster [DBG] pgmap v137: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 180 KiB/s rd, 341 B/s wr, 278 op/s 2026-03-09T14:42:00.531 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:00 vm07 bash[55244]: audit 2026-03-09T14:42:00.211451+0000 mon.a (mon.0) 586 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:00.531 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:00 vm07 bash[55244]: audit 2026-03-09T14:42:00.211451+0000 mon.a (mon.0) 586 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:00.531 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:00 vm07 bash[55244]: audit 2026-03-09T14:42:00.218447+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:00.531 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:00 vm07 bash[55244]: audit 2026-03-09T14:42:00.218447+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.404 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:42:01 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:42:01.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:42:01.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: cephadm 2026-03-09T14:42:00.205665+0000 mgr.y (mgr.44103) 233 : cephadm [INF] Detected new or changed devices on vm11 2026-03-09T14:42:01.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: cephadm 2026-03-09T14:42:00.205665+0000 mgr.y (mgr.44103) 233 : cephadm [INF] Detected new or changed devices on vm11 2026-03-09T14:42:01.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: cephadm 2026-03-09T14:42:00.329739+0000 mgr.y (mgr.44103) 234 : cephadm [INF] Detected new or changed devices on vm07 2026-03-09T14:42:01.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: cephadm 2026-03-09T14:42:00.329739+0000 mgr.y (mgr.44103) 234 : cephadm [INF] Detected new or changed devices on vm07 2026-03-09T14:42:01.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.335184+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.335184+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.341630+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.341630+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.342727+0000 mon.a (mon.0) 590 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:42:01.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.342727+0000 mon.a (mon.0) 590 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:42:01.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.343347+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:42:01.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.343347+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:42:01.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.348110+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.348110+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.391106+0000 mon.a (mon.0) 593 : audit [DBG] 
from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.391106+0000 mon.a (mon.0) 593 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.392165+0000 mon.a (mon.0) 594 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.392165+0000 mon.a (mon.0) 594 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.392876+0000 mon.a (mon.0) 595 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.392876+0000 mon.a (mon.0) 595 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.393392+0000 mon.a (mon.0) 596 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.393392+0000 mon.a (mon.0) 596 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.394177+0000 mon.a (mon.0) 597 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.394177+0000 mon.a (mon.0) 597 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.395267+0000 mon.a (mon.0) 598 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.395267+0000 mon.a (mon.0) 598 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.395924+0000 mon.a (mon.0) 599 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.395924+0000 mon.a (mon.0) 599 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' 
entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: cephadm 2026-03-09T14:42:00.396290+0000 mgr.y (mgr.44103) 235 : cephadm [INF] Upgrade: Setting container_image for all rgw 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: cephadm 2026-03-09T14:42:00.396290+0000 mgr.y (mgr.44103) 235 : cephadm [INF] Upgrade: Setting container_image for all rgw 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.400031+0000 mon.a (mon.0) 600 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.400031+0000 mon.a (mon.0) 600 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.401477+0000 mon.a (mon.0) 601 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm07.urmgxb"}]: dispatch 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.401477+0000 mon.a (mon.0) 601 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm07.urmgxb"}]: dispatch 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.407478+0000 mon.a (mon.0) 602 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm07.urmgxb"}]': finished 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.407478+0000 mon.a (mon.0) 602 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm07.urmgxb"}]': finished 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.411590+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm11.ncyump"}]: dispatch 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.411590+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm11.ncyump"}]: dispatch 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.416278+0000 mon.a (mon.0) 604 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm11.ncyump"}]': finished 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.416278+0000 mon.a (mon.0) 604 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", 
"who": "client.rgw.foo.vm11.ncyump"}]': finished 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.417546+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm07.tkkeli"}]: dispatch 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.417546+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm07.tkkeli"}]: dispatch 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.422969+0000 mon.a (mon.0) 606 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm07.tkkeli"}]': finished 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.422969+0000 mon.a (mon.0) 606 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm07.tkkeli"}]': finished 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.424071+0000 mon.a (mon.0) 607 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm11.ocxkef"}]: dispatch 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.424071+0000 mon.a (mon.0) 607 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm11.ocxkef"}]: dispatch 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.429182+0000 mon.a (mon.0) 608 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm11.ocxkef"}]': finished 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.429182+0000 mon.a (mon.0) 608 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm11.ocxkef"}]': finished 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.430936+0000 mon.a (mon.0) 609 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.430936+0000 mon.a (mon.0) 609 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: cephadm 2026-03-09T14:42:00.431637+0000 mgr.y (mgr.44103) 236 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 
vm07 bash[55244]: cephadm 2026-03-09T14:42:00.431637+0000 mgr.y (mgr.44103) 236 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.435994+0000 mon.a (mon.0) 610 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.435994+0000 mon.a (mon.0) 610 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.437512+0000 mon.a (mon.0) 611 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.437512+0000 mon.a (mon.0) 611 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.438508+0000 mon.a (mon.0) 612 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.438508+0000 mon.a (mon.0) 612 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: cephadm 2026-03-09T14:42:00.438931+0000 mgr.y (mgr.44103) 237 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: cephadm 2026-03-09T14:42:00.438931+0000 mgr.y (mgr.44103) 237 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.442216+0000 mon.a (mon.0) 613 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.442216+0000 mon.a (mon.0) 613 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.840121+0000 mon.a (mon.0) 614 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.840121+0000 mon.a (mon.0) 614 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.844286+0000 mon.a (mon.0) 615 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm07.ohlmos", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 
2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.844286+0000 mon.a (mon.0) 615 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm07.ohlmos", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.848089+0000 mon.a (mon.0) 616 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:42:01.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:01 vm07 bash[55244]: audit 2026-03-09T14:42:00.848089+0000 mon.a (mon.0) 616 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:42:01.405 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:42:01 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:42:01.406 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:42:01 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:42:01.406 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:42:01 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:42:01.406 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:42:01 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: cephadm 2026-03-09T14:42:00.205665+0000 mgr.y (mgr.44103) 233 : cephadm [INF] Detected new or changed devices on vm11 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: cephadm 2026-03-09T14:42:00.205665+0000 mgr.y (mgr.44103) 233 : cephadm [INF] Detected new or changed devices on vm11 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: cephadm 2026-03-09T14:42:00.329739+0000 mgr.y (mgr.44103) 234 : cephadm [INF] Detected new or changed devices on vm07 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: cephadm 2026-03-09T14:42:00.329739+0000 mgr.y (mgr.44103) 234 : cephadm [INF] Detected new or changed devices on vm07 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.335184+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.335184+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.341630+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.341630+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.342727+0000 mon.a (mon.0) 590 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.342727+0000 mon.a (mon.0) 590 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.343347+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.343347+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.348110+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.348110+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.391106+0000 mon.a (mon.0) 593 : audit [DBG] 
from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.391106+0000 mon.a (mon.0) 593 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.392165+0000 mon.a (mon.0) 594 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.392165+0000 mon.a (mon.0) 594 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.392876+0000 mon.a (mon.0) 595 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.392876+0000 mon.a (mon.0) 595 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.393392+0000 mon.a (mon.0) 596 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.393392+0000 mon.a (mon.0) 596 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.394177+0000 mon.a (mon.0) 597 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.394177+0000 mon.a (mon.0) 597 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.395267+0000 mon.a (mon.0) 598 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.395267+0000 mon.a (mon.0) 598 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.395924+0000 mon.a (mon.0) 599 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.395924+0000 mon.a (mon.0) 599 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' 
entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: cephadm 2026-03-09T14:42:00.396290+0000 mgr.y (mgr.44103) 235 : cephadm [INF] Upgrade: Setting container_image for all rgw 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: cephadm 2026-03-09T14:42:00.396290+0000 mgr.y (mgr.44103) 235 : cephadm [INF] Upgrade: Setting container_image for all rgw 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.400031+0000 mon.a (mon.0) 600 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.400031+0000 mon.a (mon.0) 600 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.401477+0000 mon.a (mon.0) 601 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm07.urmgxb"}]: dispatch 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.401477+0000 mon.a (mon.0) 601 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm07.urmgxb"}]: dispatch 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.407478+0000 mon.a (mon.0) 602 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm07.urmgxb"}]': finished 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.407478+0000 mon.a (mon.0) 602 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm07.urmgxb"}]': finished 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.411590+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm11.ncyump"}]: dispatch 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.411590+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm11.ncyump"}]: dispatch 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.416278+0000 mon.a (mon.0) 604 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm11.ncyump"}]': finished 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.416278+0000 mon.a (mon.0) 604 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", 
"who": "client.rgw.foo.vm11.ncyump"}]': finished 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.417546+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm07.tkkeli"}]: dispatch 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.417546+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm07.tkkeli"}]: dispatch 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.422969+0000 mon.a (mon.0) 606 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm07.tkkeli"}]': finished 2026-03-09T14:42:01.406 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.422969+0000 mon.a (mon.0) 606 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm07.tkkeli"}]': finished 2026-03-09T14:42:01.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.424071+0000 mon.a (mon.0) 607 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm11.ocxkef"}]: dispatch 2026-03-09T14:42:01.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.424071+0000 mon.a (mon.0) 607 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm11.ocxkef"}]: dispatch 2026-03-09T14:42:01.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.429182+0000 mon.a (mon.0) 608 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm11.ocxkef"}]': finished 2026-03-09T14:42:01.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.429182+0000 mon.a (mon.0) 608 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm11.ocxkef"}]': finished 2026-03-09T14:42:01.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.430936+0000 mon.a (mon.0) 609 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.430936+0000 mon.a (mon.0) 609 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: cephadm 2026-03-09T14:42:00.431637+0000 mgr.y (mgr.44103) 236 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror 2026-03-09T14:42:01.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 
vm07 bash[56315]: cephadm 2026-03-09T14:42:00.431637+0000 mgr.y (mgr.44103) 236 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror 2026-03-09T14:42:01.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.435994+0000 mon.a (mon.0) 610 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.435994+0000 mon.a (mon.0) 610 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.437512+0000 mon.a (mon.0) 611 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.437512+0000 mon.a (mon.0) 611 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.438508+0000 mon.a (mon.0) 612 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.438508+0000 mon.a (mon.0) 612 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: cephadm 2026-03-09T14:42:00.438931+0000 mgr.y (mgr.44103) 237 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter 2026-03-09T14:42:01.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: cephadm 2026-03-09T14:42:00.438931+0000 mgr.y (mgr.44103) 237 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter 2026-03-09T14:42:01.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.442216+0000 mon.a (mon.0) 613 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.442216+0000 mon.a (mon.0) 613 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.840121+0000 mon.a (mon.0) 614 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.840121+0000 mon.a (mon.0) 614 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.844286+0000 mon.a (mon.0) 615 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm07.ohlmos", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 
2026-03-09T14:42:01.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.844286+0000 mon.a (mon.0) 615 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm07.ohlmos", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T14:42:01.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.848089+0000 mon.a (mon.0) 616 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:42:01.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 bash[56315]: audit 2026-03-09T14:42:00.848089+0000 mon.a (mon.0) 616 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:42:01.407 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:01 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:42:01.407 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:42:01 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:42:01.407 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:42:01 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: cephadm 2026-03-09T14:42:00.205665+0000 mgr.y (mgr.44103) 233 : cephadm [INF] Detected new or changed devices on vm11 2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: cephadm 2026-03-09T14:42:00.205665+0000 mgr.y (mgr.44103) 233 : cephadm [INF] Detected new or changed devices on vm11 2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: cephadm 2026-03-09T14:42:00.329739+0000 mgr.y (mgr.44103) 234 : cephadm [INF] Detected new or changed devices on vm07 2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: cephadm 2026-03-09T14:42:00.329739+0000 mgr.y (mgr.44103) 234 : cephadm [INF] Detected new or changed devices on vm07 2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.335184+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.335184+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.341630+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.341630+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.342727+0000 mon.a (mon.0) 590 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.342727+0000 mon.a (mon.0) 590 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.343347+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.343347+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.348110+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.348110+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.391106+0000 mon.a (mon.0) 593 : audit [DBG] 
from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.391106+0000 mon.a (mon.0) 593 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.392165+0000 mon.a (mon.0) 594 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.392165+0000 mon.a (mon.0) 594 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.392876+0000 mon.a (mon.0) 595 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.392876+0000 mon.a (mon.0) 595 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.393392+0000 mon.a (mon.0) 596 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.393392+0000 mon.a (mon.0) 596 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.394177+0000 mon.a (mon.0) 597 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.394177+0000 mon.a (mon.0) 597 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.395267+0000 mon.a (mon.0) 598 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.395267+0000 mon.a (mon.0) 598 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.395924+0000 mon.a (mon.0) 599 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.395924+0000 mon.a (mon.0) 599 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' 
entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: cephadm 2026-03-09T14:42:00.396290+0000 mgr.y (mgr.44103) 235 : cephadm [INF] Upgrade: Setting container_image for all rgw 2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: cephadm 2026-03-09T14:42:00.396290+0000 mgr.y (mgr.44103) 235 : cephadm [INF] Upgrade: Setting container_image for all rgw 2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.400031+0000 mon.a (mon.0) 600 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.400031+0000 mon.a (mon.0) 600 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.401477+0000 mon.a (mon.0) 601 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm07.urmgxb"}]: dispatch 2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.401477+0000 mon.a (mon.0) 601 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm07.urmgxb"}]: dispatch 2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.407478+0000 mon.a (mon.0) 602 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm07.urmgxb"}]': finished 2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.407478+0000 mon.a (mon.0) 602 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm07.urmgxb"}]': finished 2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.411590+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm11.ncyump"}]: dispatch 2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.411590+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm11.ncyump"}]: dispatch 2026-03-09T14:42:01.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.416278+0000 mon.a (mon.0) 604 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm11.ncyump"}]': finished 2026-03-09T14:42:01.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.416278+0000 mon.a (mon.0) 604 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", 
"who": "client.rgw.foo.vm11.ncyump"}]': finished 2026-03-09T14:42:01.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.417546+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm07.tkkeli"}]: dispatch 2026-03-09T14:42:01.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.417546+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm07.tkkeli"}]: dispatch 2026-03-09T14:42:01.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.422969+0000 mon.a (mon.0) 606 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm07.tkkeli"}]': finished 2026-03-09T14:42:01.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.422969+0000 mon.a (mon.0) 606 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm07.tkkeli"}]': finished 2026-03-09T14:42:01.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.424071+0000 mon.a (mon.0) 607 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm11.ocxkef"}]: dispatch 2026-03-09T14:42:01.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.424071+0000 mon.a (mon.0) 607 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm11.ocxkef"}]: dispatch 2026-03-09T14:42:01.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.429182+0000 mon.a (mon.0) 608 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm11.ocxkef"}]': finished 2026-03-09T14:42:01.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.429182+0000 mon.a (mon.0) 608 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.smpl.vm11.ocxkef"}]': finished 2026-03-09T14:42:01.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.430936+0000 mon.a (mon.0) 609 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.430936+0000 mon.a (mon.0) 609 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: cephadm 2026-03-09T14:42:00.431637+0000 mgr.y (mgr.44103) 236 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror 2026-03-09T14:42:01.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 
vm11 bash[43577]: cephadm 2026-03-09T14:42:00.431637+0000 mgr.y (mgr.44103) 236 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror 2026-03-09T14:42:01.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.435994+0000 mon.a (mon.0) 610 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.435994+0000 mon.a (mon.0) 610 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.437512+0000 mon.a (mon.0) 611 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.437512+0000 mon.a (mon.0) 611 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.438508+0000 mon.a (mon.0) 612 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.438508+0000 mon.a (mon.0) 612 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:01.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: cephadm 2026-03-09T14:42:00.438931+0000 mgr.y (mgr.44103) 237 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter 2026-03-09T14:42:01.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: cephadm 2026-03-09T14:42:00.438931+0000 mgr.y (mgr.44103) 237 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter 2026-03-09T14:42:01.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.442216+0000 mon.a (mon.0) 613 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.442216+0000 mon.a (mon.0) 613 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.840121+0000 mon.a (mon.0) 614 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.840121+0000 mon.a (mon.0) 614 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:01.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.844286+0000 mon.a (mon.0) 615 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm07.ohlmos", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 
2026-03-09T14:42:01.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.844286+0000 mon.a (mon.0) 615 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm07.ohlmos", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T14:42:01.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.848089+0000 mon.a (mon.0) 616 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:42:01.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:01 vm11 bash[43577]: audit 2026-03-09T14:42:00.848089+0000 mon.a (mon.0) 616 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:42:02.546 INFO:teuthology.orchestra.run.vm07.stdout:true 2026-03-09T14:42:02.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:02 vm11 bash[43577]: cluster 2026-03-09T14:42:00.563234+0000 mgr.y (mgr.44103) 238 : cluster [DBG] pgmap v138: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 153 KiB/s rd, 341 B/s wr, 235 op/s 2026-03-09T14:42:02.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:02 vm11 bash[43577]: cluster 2026-03-09T14:42:00.563234+0000 mgr.y (mgr.44103) 238 : cluster [DBG] pgmap v138: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 153 KiB/s rd, 341 B/s wr, 235 op/s 2026-03-09T14:42:02.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:02 vm11 bash[43577]: cephadm 2026-03-09T14:42:00.835894+0000 mgr.y (mgr.44103) 239 : cephadm [INF] Upgrade: Updating iscsi.foo.vm07.ohlmos 2026-03-09T14:42:02.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:02 vm11 bash[43577]: cephadm 2026-03-09T14:42:00.835894+0000 mgr.y (mgr.44103) 239 : cephadm [INF] Upgrade: Updating iscsi.foo.vm07.ohlmos 2026-03-09T14:42:02.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:02 vm11 bash[43577]: cephadm 2026-03-09T14:42:00.848655+0000 mgr.y (mgr.44103) 240 : cephadm [INF] Deploying daemon iscsi.foo.vm07.ohlmos on vm07 2026-03-09T14:42:02.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:02 vm11 bash[43577]: cephadm 2026-03-09T14:42:00.848655+0000 mgr.y (mgr.44103) 240 : cephadm [INF] Deploying daemon iscsi.foo.vm07.ohlmos on vm07 2026-03-09T14:42:02.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:02 vm07 bash[55244]: cluster 2026-03-09T14:42:00.563234+0000 mgr.y (mgr.44103) 238 : cluster [DBG] pgmap v138: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 153 KiB/s rd, 341 B/s wr, 235 op/s 2026-03-09T14:42:02.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:02 vm07 bash[55244]: cluster 2026-03-09T14:42:00.563234+0000 mgr.y (mgr.44103) 238 : cluster [DBG] pgmap v138: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 153 KiB/s rd, 341 B/s wr, 235 op/s 2026-03-09T14:42:02.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:02 vm07 bash[55244]: cephadm 2026-03-09T14:42:00.835894+0000 mgr.y (mgr.44103) 239 : cephadm [INF] Upgrade: Updating iscsi.foo.vm07.ohlmos 2026-03-09T14:42:02.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:02 vm07 bash[55244]: 
cephadm 2026-03-09T14:42:00.835894+0000 mgr.y (mgr.44103) 239 : cephadm [INF] Upgrade: Updating iscsi.foo.vm07.ohlmos
2026-03-09T14:42:02.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:02 vm07 bash[55244]: cephadm 2026-03-09T14:42:00.848655+0000 mgr.y (mgr.44103) 240 : cephadm [INF] Deploying daemon iscsi.foo.vm07.ohlmos on vm07
2026-03-09T14:42:02.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:02 vm07 bash[55244]: cephadm 2026-03-09T14:42:00.848655+0000 mgr.y (mgr.44103) 240 : cephadm [INF] Deploying daemon iscsi.foo.vm07.ohlmos on vm07
2026-03-09T14:42:02.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:02 vm07 bash[56315]: cluster 2026-03-09T14:42:00.563234+0000 mgr.y (mgr.44103) 238 : cluster [DBG] pgmap v138: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 153 KiB/s rd, 341 B/s wr, 235 op/s
2026-03-09T14:42:02.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:02 vm07 bash[56315]: cluster 2026-03-09T14:42:00.563234+0000 mgr.y (mgr.44103) 238 : cluster [DBG] pgmap v138: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 153 KiB/s rd, 341 B/s wr, 235 op/s
2026-03-09T14:42:02.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:02 vm07 bash[56315]: cephadm 2026-03-09T14:42:00.835894+0000 mgr.y (mgr.44103) 239 : cephadm [INF] Upgrade: Updating iscsi.foo.vm07.ohlmos
2026-03-09T14:42:02.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:02 vm07 bash[56315]: cephadm 2026-03-09T14:42:00.835894+0000 mgr.y (mgr.44103) 239 : cephadm [INF] Upgrade: Updating iscsi.foo.vm07.ohlmos
2026-03-09T14:42:02.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:02 vm07 bash[56315]: cephadm 2026-03-09T14:42:00.848655+0000 mgr.y (mgr.44103) 240 : cephadm [INF] Deploying daemon iscsi.foo.vm07.ohlmos on vm07
2026-03-09T14:42:02.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:02 vm07 bash[56315]: cephadm 2026-03-09T14:42:00.848655+0000 mgr.y (mgr.44103) 240 : cephadm [INF] Deploying daemon iscsi.foo.vm07.ohlmos on vm07
2026-03-09T14:42:02.922 INFO:teuthology.orchestra.run.vm07.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-09T14:42:02.922 INFO:teuthology.orchestra.run.vm07.stdout:alertmanager.a vm07 *:9093,9094 running (4m) 8s ago 9m 13.7M - 0.25.0 c8568f914cd2 7b5214f8e385
2026-03-09T14:42:02.922 INFO:teuthology.orchestra.run.vm07.stdout:grafana.a vm11 *:3000 running (4m) 8s ago 9m 38.9M - dad864ee21e9 614f6a00be7a
2026-03-09T14:42:02.922 INFO:teuthology.orchestra.run.vm07.stdout:iscsi.foo.vm07.ohlmos vm07 running (3m) 8s ago 8m 43.4M - 3.5 e1d6a67b021e e3b30dab288c
2026-03-09T14:42:02.922 INFO:teuthology.orchestra.run.vm07.stdout:mgr.x vm11 *:8443,9283,8765 running (3m) 8s ago 11m 465M - 19.2.3-678-ge911bdeb 654f31e6858e d35dddd392d1
2026-03-09T14:42:02.922 INFO:teuthology.orchestra.run.vm07.stdout:mgr.y vm07 *:8443,9283,8765 running (4m) 8s ago 12m 534M - 19.2.3-678-ge911bdeb 654f31e6858e bdbac6dff330
2026-03-09T14:42:02.922 INFO:teuthology.orchestra.run.vm07.stdout:mon.a vm07 running (3m) 8s ago 12m 51.2M 2048M 19.2.3-678-ge911bdeb 654f31e6858e bcdaa5dfc948
2026-03-09T14:42:02.922 INFO:teuthology.orchestra.run.vm07.stdout:mon.b vm11 running (2m) 8s ago 12m 43.4M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1caba9bf8a13
2026-03-09T14:42:02.922 INFO:teuthology.orchestra.run.vm07.stdout:mon.c vm07 running (3m) 8s ago 12m 49.7M 2048M 19.2.3-678-ge911bdeb 654f31e6858e ff7dfe3a6c7c
2026-03-09T14:42:02.922 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.a vm07 *:9100 running (4m) 8s ago 9m 7692k - 1.7.0 72c9c2088986 16d64a9c3aa7
2026-03-09T14:42:02.922 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.b vm11 *:9100 running (4m) 8s ago 9m 7715k - 1.7.0 72c9c2088986 8e368c535897
2026-03-09T14:42:02.922 INFO:teuthology.orchestra.run.vm07.stdout:osd.0 vm07 running (118s) 8s ago 11m 52.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 24632814894d
2026-03-09T14:42:02.923 INFO:teuthology.orchestra.run.vm07.stdout:osd.1 vm07 running (101s) 8s ago 11m 75.0M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 1f773b5d0f68
2026-03-09T14:42:02.923 INFO:teuthology.orchestra.run.vm07.stdout:osd.2 vm07 running (2m) 8s ago 11m 70.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 7d943c2f091c
2026-03-09T14:42:02.923 INFO:teuthology.orchestra.run.vm07.stdout:osd.3 vm07 running (2m) 8s ago 10m 56.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 7c234b83449a
2026-03-09T14:42:02.923 INFO:teuthology.orchestra.run.vm07.stdout:osd.4 vm11 running (84s) 8s ago 10m 53.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 811379ab4ba5
2026-03-09T14:42:02.923 INFO:teuthology.orchestra.run.vm07.stdout:osd.5 vm11 running (67s) 8s ago 10m 71.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e bc7e71aa5718
2026-03-09T14:42:02.923 INFO:teuthology.orchestra.run.vm07.stdout:osd.6 vm11 running (50s) 8s ago 10m 47.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 20bc2716b966
2026-03-09T14:42:02.923 INFO:teuthology.orchestra.run.vm07.stdout:osd.7 vm11 running (34s) 8s ago 9m 70.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 2557f7ad255a
2026-03-09T14:42:02.923 INFO:teuthology.orchestra.run.vm07.stdout:prometheus.a vm11 *:9095 running (3m) 8s ago 9m 40.6M - 2.51.0 1d3b7f56885b e88f0339687c
2026-03-09T14:42:02.923 INFO:teuthology.orchestra.run.vm07.stdout:rgw.foo.vm07.urmgxb vm07 *:8000 running (17s) 8s ago 8m 91.1M - 19.2.3-678-ge911bdeb 654f31e6858e df702c44464d
2026-03-09T14:42:02.923 INFO:teuthology.orchestra.run.vm07.stdout:rgw.foo.vm11.ncyump vm11 *:8000 running (15s) 8s ago 8m 91.0M - 19.2.3-678-ge911bdeb 654f31e6858e 75ca9d41b995
2026-03-09T14:42:02.923 INFO:teuthology.orchestra.run.vm07.stdout:rgw.smpl.vm07.tkkeli vm07 *:80 running (19s) 8s ago 8m 91.1M - 19.2.3-678-ge911bdeb 654f31e6858e 9a13050e9ad3
2026-03-09T14:42:02.923 INFO:teuthology.orchestra.run.vm07.stdout:rgw.smpl.vm11.ocxkef vm11 *:80 running (14s) 8s ago 8m 92.9M - 19.2.3-678-ge911bdeb 654f31e6858e 3dd8df0c45b8
2026-03-09T14:42:03.153 INFO:teuthology.orchestra.run.vm07.stdout:{
2026-03-09T14:42:03.153 INFO:teuthology.orchestra.run.vm07.stdout: "mon": {
2026-03-09T14:42:03.153 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3
2026-03-09T14:42:03.153 INFO:teuthology.orchestra.run.vm07.stdout: },
2026-03-09T14:42:03.153 INFO:teuthology.orchestra.run.vm07.stdout: "mgr": {
2026-03-09T14:42:03.153 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-09T14:42:03.153 INFO:teuthology.orchestra.run.vm07.stdout: },
2026-03-09T14:42:03.153 INFO:teuthology.orchestra.run.vm07.stdout: "osd": {
2026-03-09T14:42:03.153 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 8
2026-03-09T14:42:03.153 INFO:teuthology.orchestra.run.vm07.stdout: },
2026-03-09T14:42:03.153 INFO:teuthology.orchestra.run.vm07.stdout: "rgw": {
2026-03-09T14:42:03.153 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 4
2026-03-09T14:42:03.153 INFO:teuthology.orchestra.run.vm07.stdout: },
2026-03-09T14:42:03.153 INFO:teuthology.orchestra.run.vm07.stdout: "overall": {
2026-03-09T14:42:03.153 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 17
2026-03-09T14:42:03.153 INFO:teuthology.orchestra.run.vm07.stdout: }
2026-03-09T14:42:03.153 INFO:teuthology.orchestra.run.vm07.stdout:}
2026-03-09T14:42:03.342 INFO:teuthology.orchestra.run.vm07.stdout:{
2026-03-09T14:42:03.342 INFO:teuthology.orchestra.run.vm07.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df",
2026-03-09T14:42:03.342 INFO:teuthology.orchestra.run.vm07.stdout: "in_progress": true,
2026-03-09T14:42:03.342 INFO:teuthology.orchestra.run.vm07.stdout: "which": "Upgrading all daemon types on all hosts",
2026-03-09T14:42:03.342 INFO:teuthology.orchestra.run.vm07.stdout: "services_complete": [
2026-03-09T14:42:03.342 INFO:teuthology.orchestra.run.vm07.stdout: "rgw",
2026-03-09T14:42:03.342 INFO:teuthology.orchestra.run.vm07.stdout: "osd",
2026-03-09T14:42:03.342 INFO:teuthology.orchestra.run.vm07.stdout: "mgr",
2026-03-09T14:42:03.342 INFO:teuthology.orchestra.run.vm07.stdout: "mon"
2026-03-09T14:42:03.343 INFO:teuthology.orchestra.run.vm07.stdout: ],
2026-03-09T14:42:03.343 INFO:teuthology.orchestra.run.vm07.stdout: "progress": "17/23 daemons upgraded",
2026-03-09T14:42:03.343 INFO:teuthology.orchestra.run.vm07.stdout: "message": "Currently upgrading iscsi daemons",
2026-03-09T14:42:03.343 INFO:teuthology.orchestra.run.vm07.stdout: "is_paused": false
2026-03-09T14:42:03.343 INFO:teuthology.orchestra.run.vm07.stdout:}
2026-03-09T14:42:03.597 INFO:teuthology.orchestra.run.vm07.stdout:HEALTH_OK
2026-03-09T14:42:03.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:03 vm11 bash[43577]: audit 2026-03-09T14:42:03.162536+0000 mon.a (mon.0) 617 : audit [DBG] from='client.? 192.168.123.107:0/1863816520' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T14:42:03.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:03 vm11 bash[43577]: audit 2026-03-09T14:42:03.162536+0000 mon.a (mon.0) 617 : audit [DBG] from='client.? 192.168.123.107:0/1863816520' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T14:42:03.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:03 vm07 bash[55244]: audit 2026-03-09T14:42:03.162536+0000 mon.a (mon.0) 617 : audit [DBG] from='client.? 192.168.123.107:0/1863816520' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T14:42:03.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:03 vm07 bash[55244]: audit 2026-03-09T14:42:03.162536+0000 mon.a (mon.0) 617 : audit [DBG] from='client.? 192.168.123.107:0/1863816520' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T14:42:03.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:03 vm07 bash[56315]: audit 2026-03-09T14:42:03.162536+0000 mon.a (mon.0) 617 : audit [DBG] from='client.? 192.168.123.107:0/1863816520' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-09T14:42:03.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:03 vm07 bash[56315]: audit 2026-03-09T14:42:03.162536+0000 mon.a (mon.0) 617 : audit [DBG] from='client.?
192.168.123.107:0/1863816520' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:03.904 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:42:03 vm07 bash[52213]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:42:03] "GET /metrics HTTP/1.1" 200 38190 "" "Prometheus/2.51.0" 2026-03-09T14:42:04.451 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:42:04 vm11 bash[41290]: ts=2026-03-09T14:42:04.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.3\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.3\", ceph_version=\"ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.3\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"192.168.123.111:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:42:04.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:04 vm11 bash[43577]: audit 2026-03-09T14:42:02.544496+0000 mgr.y (mgr.44103) 241 : audit [DBG] from='client.44409 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:04.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:04 vm11 bash[43577]: audit 2026-03-09T14:42:02.544496+0000 mgr.y (mgr.44103) 241 : audit [DBG] from='client.44409 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:04.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:04 vm11 bash[43577]: cluster 2026-03-09T14:42:02.563677+0000 mgr.y (mgr.44103) 242 : cluster [DBG] pgmap v139: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 102 KiB/s rd, 85 B/s wr, 157 op/s 2026-03-09T14:42:04.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:04 vm11 bash[43577]: cluster 2026-03-09T14:42:02.563677+0000 mgr.y (mgr.44103) 242 : cluster [DBG] pgmap v139: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 102 KiB/s rd, 85 B/s wr, 157 op/s 2026-03-09T14:42:04.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 
14:42:04 vm11 bash[43577]: audit 2026-03-09T14:42:02.737277+0000 mgr.y (mgr.44103) 243 : audit [DBG] from='client.34414 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:04.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:04 vm11 bash[43577]: audit 2026-03-09T14:42:02.737277+0000 mgr.y (mgr.44103) 243 : audit [DBG] from='client.34414 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:04.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:04 vm11 bash[43577]: audit 2026-03-09T14:42:02.927352+0000 mgr.y (mgr.44103) 244 : audit [DBG] from='client.44421 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:04.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:04 vm11 bash[43577]: audit 2026-03-09T14:42:02.927352+0000 mgr.y (mgr.44103) 244 : audit [DBG] from='client.44421 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:04.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:04 vm11 bash[43577]: audit 2026-03-09T14:42:03.351261+0000 mgr.y (mgr.44103) 245 : audit [DBG] from='client.34432 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:04.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:04 vm11 bash[43577]: audit 2026-03-09T14:42:03.351261+0000 mgr.y (mgr.44103) 245 : audit [DBG] from='client.34432 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:04.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:04 vm11 bash[43577]: audit 2026-03-09T14:42:03.606298+0000 mon.c (mon.1) 22 : audit [DBG] from='client.? 192.168.123.107:0/1130068495' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:42:04.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:04 vm11 bash[43577]: audit 2026-03-09T14:42:03.606298+0000 mon.c (mon.1) 22 : audit [DBG] from='client.? 
192.168.123.107:0/1130068495' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:42:04.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:04 vm07 bash[55244]: audit 2026-03-09T14:42:02.544496+0000 mgr.y (mgr.44103) 241 : audit [DBG] from='client.44409 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:04.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:04 vm07 bash[55244]: audit 2026-03-09T14:42:02.544496+0000 mgr.y (mgr.44103) 241 : audit [DBG] from='client.44409 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:04.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:04 vm07 bash[55244]: cluster 2026-03-09T14:42:02.563677+0000 mgr.y (mgr.44103) 242 : cluster [DBG] pgmap v139: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 102 KiB/s rd, 85 B/s wr, 157 op/s 2026-03-09T14:42:04.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:04 vm07 bash[55244]: cluster 2026-03-09T14:42:02.563677+0000 mgr.y (mgr.44103) 242 : cluster [DBG] pgmap v139: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 102 KiB/s rd, 85 B/s wr, 157 op/s 2026-03-09T14:42:04.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:04 vm07 bash[55244]: audit 2026-03-09T14:42:02.737277+0000 mgr.y (mgr.44103) 243 : audit [DBG] from='client.34414 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:04.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:04 vm07 bash[55244]: audit 2026-03-09T14:42:02.737277+0000 mgr.y (mgr.44103) 243 : audit [DBG] from='client.34414 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:04.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:04 vm07 bash[55244]: audit 2026-03-09T14:42:02.927352+0000 mgr.y (mgr.44103) 244 : audit [DBG] from='client.44421 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:04.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:04 vm07 bash[55244]: audit 2026-03-09T14:42:02.927352+0000 mgr.y (mgr.44103) 244 : audit [DBG] from='client.44421 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:04.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:04 vm07 bash[55244]: audit 2026-03-09T14:42:03.351261+0000 mgr.y (mgr.44103) 245 : audit [DBG] from='client.34432 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:04.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:04 vm07 bash[55244]: audit 2026-03-09T14:42:03.351261+0000 mgr.y (mgr.44103) 245 : audit [DBG] from='client.34432 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:04.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:04 vm07 bash[55244]: audit 2026-03-09T14:42:03.606298+0000 mon.c (mon.1) 22 : audit [DBG] from='client.? 192.168.123.107:0/1130068495' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:42:04.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:04 vm07 bash[55244]: audit 2026-03-09T14:42:03.606298+0000 mon.c (mon.1) 22 : audit [DBG] from='client.? 
192.168.123.107:0/1130068495' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:42:04.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:04 vm07 bash[56315]: audit 2026-03-09T14:42:02.544496+0000 mgr.y (mgr.44103) 241 : audit [DBG] from='client.44409 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:04.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:04 vm07 bash[56315]: audit 2026-03-09T14:42:02.544496+0000 mgr.y (mgr.44103) 241 : audit [DBG] from='client.44409 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:04.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:04 vm07 bash[56315]: cluster 2026-03-09T14:42:02.563677+0000 mgr.y (mgr.44103) 242 : cluster [DBG] pgmap v139: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 102 KiB/s rd, 85 B/s wr, 157 op/s 2026-03-09T14:42:04.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:04 vm07 bash[56315]: cluster 2026-03-09T14:42:02.563677+0000 mgr.y (mgr.44103) 242 : cluster [DBG] pgmap v139: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 102 KiB/s rd, 85 B/s wr, 157 op/s 2026-03-09T14:42:04.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:04 vm07 bash[56315]: audit 2026-03-09T14:42:02.737277+0000 mgr.y (mgr.44103) 243 : audit [DBG] from='client.34414 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:04.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:04 vm07 bash[56315]: audit 2026-03-09T14:42:02.737277+0000 mgr.y (mgr.44103) 243 : audit [DBG] from='client.34414 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:04.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:04 vm07 bash[56315]: audit 2026-03-09T14:42:02.927352+0000 mgr.y (mgr.44103) 244 : audit [DBG] from='client.44421 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:04.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:04 vm07 bash[56315]: audit 2026-03-09T14:42:02.927352+0000 mgr.y (mgr.44103) 244 : audit [DBG] from='client.44421 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:04.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:04 vm07 bash[56315]: audit 2026-03-09T14:42:03.351261+0000 mgr.y (mgr.44103) 245 : audit [DBG] from='client.34432 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:04.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:04 vm07 bash[56315]: audit 2026-03-09T14:42:03.351261+0000 mgr.y (mgr.44103) 245 : audit [DBG] from='client.34432 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:04.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:04 vm07 bash[56315]: audit 2026-03-09T14:42:03.606298+0000 mon.c (mon.1) 22 : audit [DBG] from='client.? 192.168.123.107:0/1130068495' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:42:04.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:04 vm07 bash[56315]: audit 2026-03-09T14:42:03.606298+0000 mon.c (mon.1) 22 : audit [DBG] from='client.? 
192.168.123.107:0/1130068495' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:42:06.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:06 vm11 bash[43577]: cluster 2026-03-09T14:42:04.564100+0000 mgr.y (mgr.44103) 246 : cluster [DBG] pgmap v140: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 27 KiB/s rd, 85 B/s wr, 41 op/s 2026-03-09T14:42:06.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:06 vm11 bash[43577]: cluster 2026-03-09T14:42:04.564100+0000 mgr.y (mgr.44103) 246 : cluster [DBG] pgmap v140: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 27 KiB/s rd, 85 B/s wr, 41 op/s 2026-03-09T14:42:06.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:06 vm07 bash[55244]: cluster 2026-03-09T14:42:04.564100+0000 mgr.y (mgr.44103) 246 : cluster [DBG] pgmap v140: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 27 KiB/s rd, 85 B/s wr, 41 op/s 2026-03-09T14:42:06.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:06 vm07 bash[55244]: cluster 2026-03-09T14:42:04.564100+0000 mgr.y (mgr.44103) 246 : cluster [DBG] pgmap v140: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 27 KiB/s rd, 85 B/s wr, 41 op/s 2026-03-09T14:42:06.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:06 vm07 bash[56315]: cluster 2026-03-09T14:42:04.564100+0000 mgr.y (mgr.44103) 246 : cluster [DBG] pgmap v140: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 27 KiB/s rd, 85 B/s wr, 41 op/s 2026-03-09T14:42:06.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:06 vm07 bash[56315]: cluster 2026-03-09T14:42:04.564100+0000 mgr.y (mgr.44103) 246 : cluster [DBG] pgmap v140: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 27 KiB/s rd, 85 B/s wr, 41 op/s 2026-03-09T14:42:07.252 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:42:06 vm11 bash[41290]: ts=2026-03-09T14:42:06.947Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:42:08.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:08 vm11 bash[43577]: cluster 2026-03-09T14:42:06.564496+0000 mgr.y 
(mgr.44103) 247 : cluster [DBG] pgmap v141: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:08.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:08 vm11 bash[43577]: cluster 2026-03-09T14:42:06.564496+0000 mgr.y (mgr.44103) 247 : cluster [DBG] pgmap v141: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:08.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:08 vm11 bash[43577]: audit 2026-03-09T14:42:07.575757+0000 mon.a (mon.0) 618 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:42:08.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:08 vm11 bash[43577]: audit 2026-03-09T14:42:07.575757+0000 mon.a (mon.0) 618 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:42:08.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:08 vm07 bash[55244]: cluster 2026-03-09T14:42:06.564496+0000 mgr.y (mgr.44103) 247 : cluster [DBG] pgmap v141: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:08.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:08 vm07 bash[55244]: cluster 2026-03-09T14:42:06.564496+0000 mgr.y (mgr.44103) 247 : cluster [DBG] pgmap v141: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:08.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:08 vm07 bash[55244]: audit 2026-03-09T14:42:07.575757+0000 mon.a (mon.0) 618 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:42:08.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:08 vm07 bash[55244]: audit 2026-03-09T14:42:07.575757+0000 mon.a (mon.0) 618 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:42:08.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:08 vm07 bash[56315]: cluster 2026-03-09T14:42:06.564496+0000 mgr.y (mgr.44103) 247 : cluster [DBG] pgmap v141: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:08.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:08 vm07 bash[56315]: cluster 2026-03-09T14:42:06.564496+0000 mgr.y (mgr.44103) 247 : cluster [DBG] pgmap v141: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:08.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:08 vm07 bash[56315]: audit 2026-03-09T14:42:07.575757+0000 mon.a (mon.0) 618 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:42:08.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:08 vm07 bash[56315]: audit 2026-03-09T14:42:07.575757+0000 mon.a (mon.0) 618 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:42:09.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:09 vm11 bash[43577]: audit 2026-03-09T14:42:07.595078+0000 mgr.y (mgr.44103) 248 : audit [DBG] from='client.15153 -' 
entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:42:09.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:09 vm11 bash[43577]: audit 2026-03-09T14:42:07.595078+0000 mgr.y (mgr.44103) 248 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:42:09.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:09 vm07 bash[55244]: audit 2026-03-09T14:42:07.595078+0000 mgr.y (mgr.44103) 248 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:42:09.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:09 vm07 bash[55244]: audit 2026-03-09T14:42:07.595078+0000 mgr.y (mgr.44103) 248 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:42:09.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:09 vm07 bash[56315]: audit 2026-03-09T14:42:07.595078+0000 mgr.y (mgr.44103) 248 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:42:09.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:09 vm07 bash[56315]: audit 2026-03-09T14:42:07.595078+0000 mgr.y (mgr.44103) 248 : audit [DBG] from='client.15153 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:42:10.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:10 vm11 bash[43577]: cluster 2026-03-09T14:42:08.565098+0000 mgr.y (mgr.44103) 249 : cluster [DBG] pgmap v142: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:42:10.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:10 vm11 bash[43577]: cluster 2026-03-09T14:42:08.565098+0000 mgr.y (mgr.44103) 249 : cluster [DBG] pgmap v142: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:42:10.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:10 vm07 bash[55244]: cluster 2026-03-09T14:42:08.565098+0000 mgr.y (mgr.44103) 249 : cluster [DBG] pgmap v142: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:42:10.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:10 vm07 bash[55244]: cluster 2026-03-09T14:42:08.565098+0000 mgr.y (mgr.44103) 249 : cluster [DBG] pgmap v142: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:42:10.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:10 vm07 bash[56315]: cluster 2026-03-09T14:42:08.565098+0000 mgr.y (mgr.44103) 249 : cluster [DBG] pgmap v142: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:42:10.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:10 vm07 bash[56315]: cluster 2026-03-09T14:42:08.565098+0000 mgr.y (mgr.44103) 249 : cluster [DBG] pgmap v142: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:42:11.715 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:11 vm07 bash[55244]: cluster 2026-03-09T14:42:10.565496+0000 mgr.y (mgr.44103) 250 : cluster [DBG] pgmap v143: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 
GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:11.715 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:11 vm07 bash[55244]: cluster 2026-03-09T14:42:10.565496+0000 mgr.y (mgr.44103) 250 : cluster [DBG] pgmap v143: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:11.715 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:11 vm07 bash[56315]: cluster 2026-03-09T14:42:10.565496+0000 mgr.y (mgr.44103) 250 : cluster [DBG] pgmap v143: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:11.715 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:11 vm07 bash[56315]: cluster 2026-03-09T14:42:10.565496+0000 mgr.y (mgr.44103) 250 : cluster [DBG] pgmap v143: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:12.002 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:11 vm11 bash[43577]: cluster 2026-03-09T14:42:10.565496+0000 mgr.y (mgr.44103) 250 : cluster [DBG] pgmap v143: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:12.002 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:11 vm11 bash[43577]: cluster 2026-03-09T14:42:10.565496+0000 mgr.y (mgr.44103) 250 : cluster [DBG] pgmap v143: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:12.108 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:11 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:42:12.108 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:42:11 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:42:12.108 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:11 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:42:12.109 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:42:11 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:42:12.109 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:42:11 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:42:12.109 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:42:11 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:42:12.109 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:42:11 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:42:12.109 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:42:11 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:42:12.109 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:42:11 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:42:13.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:12 vm07 bash[55244]: audit 2026-03-09T14:42:11.816531+0000 mon.a (mon.0) 619 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:13.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:12 vm07 bash[55244]: audit 2026-03-09T14:42:11.816531+0000 mon.a (mon.0) 619 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:13.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:12 vm07 bash[55244]: audit 2026-03-09T14:42:11.823046+0000 mon.a (mon.0) 620 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:13.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:12 vm07 bash[55244]: audit 2026-03-09T14:42:11.823046+0000 mon.a (mon.0) 620 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:13.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:12 vm07 bash[55244]: audit 2026-03-09T14:42:12.276328+0000 mon.a (mon.0) 621 : audit [DBG] from='client.? 192.168.123.107:0/3141447161' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T14:42:13.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:12 vm07 bash[55244]: audit 2026-03-09T14:42:12.276328+0000 mon.a (mon.0) 621 : audit [DBG] from='client.? 
192.168.123.107:0/3141447161' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T14:42:13.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:12 vm07 bash[55244]: audit 2026-03-09T14:42:12.428740+0000 mon.c (mon.1) 23 : audit [INF] from='client.? 192.168.123.107:0/2163114995' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/3556397736"}]: dispatch 2026-03-09T14:42:13.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:12 vm07 bash[55244]: audit 2026-03-09T14:42:12.428740+0000 mon.c (mon.1) 23 : audit [INF] from='client.? 192.168.123.107:0/2163114995' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/3556397736"}]: dispatch 2026-03-09T14:42:13.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:12 vm07 bash[55244]: audit 2026-03-09T14:42:12.429084+0000 mon.a (mon.0) 622 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/3556397736"}]: dispatch 2026-03-09T14:42:13.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:12 vm07 bash[55244]: audit 2026-03-09T14:42:12.429084+0000 mon.a (mon.0) 622 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/3556397736"}]: dispatch 2026-03-09T14:42:13.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:12 vm07 bash[56315]: audit 2026-03-09T14:42:11.816531+0000 mon.a (mon.0) 619 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:13.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:12 vm07 bash[56315]: audit 2026-03-09T14:42:11.816531+0000 mon.a (mon.0) 619 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:13.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:12 vm07 bash[56315]: audit 2026-03-09T14:42:11.823046+0000 mon.a (mon.0) 620 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:13.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:12 vm07 bash[56315]: audit 2026-03-09T14:42:11.823046+0000 mon.a (mon.0) 620 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:13.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:12 vm07 bash[56315]: audit 2026-03-09T14:42:12.276328+0000 mon.a (mon.0) 621 : audit [DBG] from='client.? 192.168.123.107:0/3141447161' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T14:42:13.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:12 vm07 bash[56315]: audit 2026-03-09T14:42:12.276328+0000 mon.a (mon.0) 621 : audit [DBG] from='client.? 192.168.123.107:0/3141447161' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T14:42:13.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:12 vm07 bash[56315]: audit 2026-03-09T14:42:12.428740+0000 mon.c (mon.1) 23 : audit [INF] from='client.? 192.168.123.107:0/2163114995' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/3556397736"}]: dispatch 2026-03-09T14:42:13.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:12 vm07 bash[56315]: audit 2026-03-09T14:42:12.428740+0000 mon.c (mon.1) 23 : audit [INF] from='client.? 
192.168.123.107:0/2163114995' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/3556397736"}]: dispatch 2026-03-09T14:42:13.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:12 vm07 bash[56315]: audit 2026-03-09T14:42:12.429084+0000 mon.a (mon.0) 622 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/3556397736"}]: dispatch 2026-03-09T14:42:13.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:12 vm07 bash[56315]: audit 2026-03-09T14:42:12.429084+0000 mon.a (mon.0) 622 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/3556397736"}]: dispatch 2026-03-09T14:42:13.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:12 vm11 bash[43577]: audit 2026-03-09T14:42:11.816531+0000 mon.a (mon.0) 619 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:13.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:12 vm11 bash[43577]: audit 2026-03-09T14:42:11.816531+0000 mon.a (mon.0) 619 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:13.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:12 vm11 bash[43577]: audit 2026-03-09T14:42:11.823046+0000 mon.a (mon.0) 620 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:13.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:12 vm11 bash[43577]: audit 2026-03-09T14:42:11.823046+0000 mon.a (mon.0) 620 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:13.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:12 vm11 bash[43577]: audit 2026-03-09T14:42:12.276328+0000 mon.a (mon.0) 621 : audit [DBG] from='client.? 192.168.123.107:0/3141447161' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T14:42:13.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:12 vm11 bash[43577]: audit 2026-03-09T14:42:12.276328+0000 mon.a (mon.0) 621 : audit [DBG] from='client.? 192.168.123.107:0/3141447161' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T14:42:13.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:12 vm11 bash[43577]: audit 2026-03-09T14:42:12.428740+0000 mon.c (mon.1) 23 : audit [INF] from='client.? 192.168.123.107:0/2163114995' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/3556397736"}]: dispatch 2026-03-09T14:42:13.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:12 vm11 bash[43577]: audit 2026-03-09T14:42:12.428740+0000 mon.c (mon.1) 23 : audit [INF] from='client.? 192.168.123.107:0/2163114995' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/3556397736"}]: dispatch 2026-03-09T14:42:13.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:12 vm11 bash[43577]: audit 2026-03-09T14:42:12.429084+0000 mon.a (mon.0) 622 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/3556397736"}]: dispatch 2026-03-09T14:42:13.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:12 vm11 bash[43577]: audit 2026-03-09T14:42:12.429084+0000 mon.a (mon.0) 622 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/3556397736"}]: dispatch 2026-03-09T14:42:13.826 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:42:13 vm07 bash[52213]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:42:13] "GET /metrics HTTP/1.1" 200 38255 "" "Prometheus/2.51.0" 2026-03-09T14:42:14.142 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:13 vm11 bash[43577]: cluster 2026-03-09T14:42:12.565893+0000 mgr.y (mgr.44103) 251 : cluster [DBG] pgmap v144: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:14.142 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:13 vm11 bash[43577]: cluster 2026-03-09T14:42:12.565893+0000 mgr.y (mgr.44103) 251 : cluster [DBG] pgmap v144: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:14.143 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:13 vm11 bash[43577]: audit 2026-03-09T14:42:12.827804+0000 mon.a (mon.0) 623 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/3556397736"}]': finished 2026-03-09T14:42:14.143 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:13 vm11 bash[43577]: audit 2026-03-09T14:42:12.827804+0000 mon.a (mon.0) 623 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/3556397736"}]': finished 2026-03-09T14:42:14.143 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:13 vm11 bash[43577]: cluster 2026-03-09T14:42:12.832692+0000 mon.a (mon.0) 624 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in 2026-03-09T14:42:14.143 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:13 vm11 bash[43577]: cluster 2026-03-09T14:42:12.832692+0000 mon.a (mon.0) 624 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in 2026-03-09T14:42:14.143 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:13 vm11 bash[43577]: audit 2026-03-09T14:42:12.987149+0000 mon.c (mon.1) 24 : audit [INF] from='client.? 192.168.123.107:0/1081676995' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2207382656"}]: dispatch 2026-03-09T14:42:14.143 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:13 vm11 bash[43577]: audit 2026-03-09T14:42:12.987149+0000 mon.c (mon.1) 24 : audit [INF] from='client.? 192.168.123.107:0/1081676995' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2207382656"}]: dispatch 2026-03-09T14:42:14.143 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:13 vm11 bash[43577]: audit 2026-03-09T14:42:12.987476+0000 mon.a (mon.0) 625 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2207382656"}]: dispatch 2026-03-09T14:42:14.143 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:13 vm11 bash[43577]: audit 2026-03-09T14:42:12.987476+0000 mon.a (mon.0) 625 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2207382656"}]: dispatch 2026-03-09T14:42:14.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:13 vm07 bash[55244]: cluster 2026-03-09T14:42:12.565893+0000 mgr.y (mgr.44103) 251 : cluster [DBG] pgmap v144: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:14.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:13 vm07 bash[55244]: cluster 2026-03-09T14:42:12.565893+0000 mgr.y (mgr.44103) 251 : cluster [DBG] pgmap v144: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:14.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:13 vm07 bash[55244]: audit 2026-03-09T14:42:12.827804+0000 mon.a (mon.0) 623 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/3556397736"}]': finished 2026-03-09T14:42:14.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:13 vm07 bash[55244]: audit 2026-03-09T14:42:12.827804+0000 mon.a (mon.0) 623 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/3556397736"}]': finished 2026-03-09T14:42:14.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:13 vm07 bash[55244]: cluster 2026-03-09T14:42:12.832692+0000 mon.a (mon.0) 624 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in 2026-03-09T14:42:14.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:13 vm07 bash[55244]: cluster 2026-03-09T14:42:12.832692+0000 mon.a (mon.0) 624 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in 2026-03-09T14:42:14.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:13 vm07 bash[55244]: audit 2026-03-09T14:42:12.987149+0000 mon.c (mon.1) 24 : audit [INF] from='client.? 192.168.123.107:0/1081676995' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2207382656"}]: dispatch 2026-03-09T14:42:14.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:13 vm07 bash[55244]: audit 2026-03-09T14:42:12.987149+0000 mon.c (mon.1) 24 : audit [INF] from='client.? 192.168.123.107:0/1081676995' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2207382656"}]: dispatch 2026-03-09T14:42:14.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:13 vm07 bash[55244]: audit 2026-03-09T14:42:12.987476+0000 mon.a (mon.0) 625 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2207382656"}]: dispatch 2026-03-09T14:42:14.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:13 vm07 bash[55244]: audit 2026-03-09T14:42:12.987476+0000 mon.a (mon.0) 625 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2207382656"}]: dispatch 2026-03-09T14:42:14.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:13 vm07 bash[56315]: cluster 2026-03-09T14:42:12.565893+0000 mgr.y (mgr.44103) 251 : cluster [DBG] pgmap v144: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:14.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:13 vm07 bash[56315]: cluster 2026-03-09T14:42:12.565893+0000 mgr.y (mgr.44103) 251 : cluster [DBG] pgmap v144: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:14.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:13 vm07 bash[56315]: audit 2026-03-09T14:42:12.827804+0000 mon.a (mon.0) 623 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/3556397736"}]': finished 2026-03-09T14:42:14.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:13 vm07 bash[56315]: audit 2026-03-09T14:42:12.827804+0000 mon.a (mon.0) 623 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6801/3556397736"}]': finished 2026-03-09T14:42:14.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:13 vm07 bash[56315]: cluster 2026-03-09T14:42:12.832692+0000 mon.a (mon.0) 624 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in 2026-03-09T14:42:14.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:13 vm07 bash[56315]: cluster 2026-03-09T14:42:12.832692+0000 mon.a (mon.0) 624 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in 2026-03-09T14:42:14.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:13 vm07 bash[56315]: audit 2026-03-09T14:42:12.987149+0000 mon.c (mon.1) 24 : audit [INF] from='client.? 192.168.123.107:0/1081676995' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2207382656"}]: dispatch 2026-03-09T14:42:14.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:13 vm07 bash[56315]: audit 2026-03-09T14:42:12.987149+0000 mon.c (mon.1) 24 : audit [INF] from='client.? 192.168.123.107:0/1081676995' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2207382656"}]: dispatch 2026-03-09T14:42:14.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:13 vm07 bash[56315]: audit 2026-03-09T14:42:12.987476+0000 mon.a (mon.0) 625 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2207382656"}]: dispatch 2026-03-09T14:42:14.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:13 vm07 bash[56315]: audit 2026-03-09T14:42:12.987476+0000 mon.a (mon.0) 625 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2207382656"}]: dispatch 2026-03-09T14:42:14.502 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:42:14 vm11 bash[41290]: ts=2026-03-09T14:42:14.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.3\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.3\", ceph_version=\"ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.3\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"192.168.123.111:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:42:15.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:14 vm07 bash[55244]: audit 2026-03-09T14:42:13.830554+0000 mon.a (mon.0) 626 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2207382656"}]': finished 2026-03-09T14:42:15.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:14 vm07 bash[55244]: audit 2026-03-09T14:42:13.830554+0000 mon.a (mon.0) 626 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2207382656"}]': finished 2026-03-09T14:42:15.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:14 vm07 bash[55244]: cluster 2026-03-09T14:42:13.832922+0000 mon.a (mon.0) 627 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in 2026-03-09T14:42:15.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:14 vm07 bash[55244]: cluster 2026-03-09T14:42:13.832922+0000 mon.a (mon.0) 627 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in 2026-03-09T14:42:15.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:14 vm07 bash[55244]: audit 2026-03-09T14:42:13.997482+0000 mon.a (mon.0) 628 : audit [INF] from='client.? 
192.168.123.107:0/2727259402' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/3677113900"}]: dispatch 2026-03-09T14:42:15.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:14 vm07 bash[55244]: audit 2026-03-09T14:42:13.997482+0000 mon.a (mon.0) 628 : audit [INF] from='client.? 192.168.123.107:0/2727259402' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/3677113900"}]: dispatch 2026-03-09T14:42:15.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:14 vm07 bash[56315]: audit 2026-03-09T14:42:13.830554+0000 mon.a (mon.0) 626 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2207382656"}]': finished 2026-03-09T14:42:15.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:14 vm07 bash[56315]: audit 2026-03-09T14:42:13.830554+0000 mon.a (mon.0) 626 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2207382656"}]': finished 2026-03-09T14:42:15.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:14 vm07 bash[56315]: cluster 2026-03-09T14:42:13.832922+0000 mon.a (mon.0) 627 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in 2026-03-09T14:42:15.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:14 vm07 bash[56315]: cluster 2026-03-09T14:42:13.832922+0000 mon.a (mon.0) 627 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in 2026-03-09T14:42:15.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:14 vm07 bash[56315]: audit 2026-03-09T14:42:13.997482+0000 mon.a (mon.0) 628 : audit [INF] from='client.? 192.168.123.107:0/2727259402' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/3677113900"}]: dispatch 2026-03-09T14:42:15.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:14 vm07 bash[56315]: audit 2026-03-09T14:42:13.997482+0000 mon.a (mon.0) 628 : audit [INF] from='client.? 192.168.123.107:0/2727259402' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/3677113900"}]: dispatch 2026-03-09T14:42:15.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:14 vm11 bash[43577]: audit 2026-03-09T14:42:13.830554+0000 mon.a (mon.0) 626 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2207382656"}]': finished 2026-03-09T14:42:15.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:14 vm11 bash[43577]: audit 2026-03-09T14:42:13.830554+0000 mon.a (mon.0) 626 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/2207382656"}]': finished 2026-03-09T14:42:15.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:14 vm11 bash[43577]: cluster 2026-03-09T14:42:13.832922+0000 mon.a (mon.0) 627 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in 2026-03-09T14:42:15.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:14 vm11 bash[43577]: cluster 2026-03-09T14:42:13.832922+0000 mon.a (mon.0) 627 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in 2026-03-09T14:42:15.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:14 vm11 bash[43577]: audit 2026-03-09T14:42:13.997482+0000 mon.a (mon.0) 628 : audit [INF] from='client.? 
192.168.123.107:0/2727259402' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/3677113900"}]: dispatch 2026-03-09T14:42:15.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:14 vm11 bash[43577]: audit 2026-03-09T14:42:13.997482+0000 mon.a (mon.0) 628 : audit [INF] from='client.? 192.168.123.107:0/2727259402' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/3677113900"}]: dispatch 2026-03-09T14:42:16.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:15 vm07 bash[55244]: cluster 2026-03-09T14:42:14.566119+0000 mgr.y (mgr.44103) 252 : cluster [DBG] pgmap v147: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:42:16.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:15 vm07 bash[55244]: cluster 2026-03-09T14:42:14.566119+0000 mgr.y (mgr.44103) 252 : cluster [DBG] pgmap v147: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:42:16.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:15 vm07 bash[55244]: audit 2026-03-09T14:42:14.841925+0000 mon.a (mon.0) 629 : audit [INF] from='client.? 192.168.123.107:0/2727259402' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/3677113900"}]': finished 2026-03-09T14:42:16.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:15 vm07 bash[55244]: audit 2026-03-09T14:42:14.841925+0000 mon.a (mon.0) 629 : audit [INF] from='client.? 192.168.123.107:0/2727259402' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/3677113900"}]': finished 2026-03-09T14:42:16.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:15 vm07 bash[55244]: cluster 2026-03-09T14:42:14.845651+0000 mon.a (mon.0) 630 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in 2026-03-09T14:42:16.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:15 vm07 bash[55244]: cluster 2026-03-09T14:42:14.845651+0000 mon.a (mon.0) 630 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in 2026-03-09T14:42:16.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:15 vm07 bash[55244]: audit 2026-03-09T14:42:15.005819+0000 mon.a (mon.0) 631 : audit [INF] from='client.? 192.168.123.107:0/3531606599' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1950711"}]: dispatch 2026-03-09T14:42:16.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:15 vm07 bash[55244]: audit 2026-03-09T14:42:15.005819+0000 mon.a (mon.0) 631 : audit [INF] from='client.? 
192.168.123.107:0/3531606599' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1950711"}]: dispatch 2026-03-09T14:42:16.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:15 vm07 bash[56315]: cluster 2026-03-09T14:42:14.566119+0000 mgr.y (mgr.44103) 252 : cluster [DBG] pgmap v147: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:42:16.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:15 vm07 bash[56315]: cluster 2026-03-09T14:42:14.566119+0000 mgr.y (mgr.44103) 252 : cluster [DBG] pgmap v147: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:42:16.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:15 vm07 bash[56315]: audit 2026-03-09T14:42:14.841925+0000 mon.a (mon.0) 629 : audit [INF] from='client.? 192.168.123.107:0/2727259402' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/3677113900"}]': finished 2026-03-09T14:42:16.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:15 vm07 bash[56315]: audit 2026-03-09T14:42:14.841925+0000 mon.a (mon.0) 629 : audit [INF] from='client.? 192.168.123.107:0/2727259402' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/3677113900"}]': finished 2026-03-09T14:42:16.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:15 vm07 bash[56315]: cluster 2026-03-09T14:42:14.845651+0000 mon.a (mon.0) 630 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in 2026-03-09T14:42:16.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:15 vm07 bash[56315]: cluster 2026-03-09T14:42:14.845651+0000 mon.a (mon.0) 630 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in 2026-03-09T14:42:16.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:15 vm07 bash[56315]: audit 2026-03-09T14:42:15.005819+0000 mon.a (mon.0) 631 : audit [INF] from='client.? 192.168.123.107:0/3531606599' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1950711"}]: dispatch 2026-03-09T14:42:16.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:15 vm07 bash[56315]: audit 2026-03-09T14:42:15.005819+0000 mon.a (mon.0) 631 : audit [INF] from='client.? 192.168.123.107:0/3531606599' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1950711"}]: dispatch 2026-03-09T14:42:16.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:15 vm11 bash[43577]: cluster 2026-03-09T14:42:14.566119+0000 mgr.y (mgr.44103) 252 : cluster [DBG] pgmap v147: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:42:16.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:15 vm11 bash[43577]: cluster 2026-03-09T14:42:14.566119+0000 mgr.y (mgr.44103) 252 : cluster [DBG] pgmap v147: 161 pgs: 161 active+clean; 457 KiB data, 290 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:42:16.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:15 vm11 bash[43577]: audit 2026-03-09T14:42:14.841925+0000 mon.a (mon.0) 629 : audit [INF] from='client.? 
192.168.123.107:0/2727259402' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/3677113900"}]': finished 2026-03-09T14:42:16.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:15 vm11 bash[43577]: audit 2026-03-09T14:42:14.841925+0000 mon.a (mon.0) 629 : audit [INF] from='client.? 192.168.123.107:0/2727259402' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/3677113900"}]': finished 2026-03-09T14:42:16.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:15 vm11 bash[43577]: cluster 2026-03-09T14:42:14.845651+0000 mon.a (mon.0) 630 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in 2026-03-09T14:42:16.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:15 vm11 bash[43577]: cluster 2026-03-09T14:42:14.845651+0000 mon.a (mon.0) 630 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in 2026-03-09T14:42:16.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:15 vm11 bash[43577]: audit 2026-03-09T14:42:15.005819+0000 mon.a (mon.0) 631 : audit [INF] from='client.? 192.168.123.107:0/3531606599' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1950711"}]: dispatch 2026-03-09T14:42:16.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:15 vm11 bash[43577]: audit 2026-03-09T14:42:15.005819+0000 mon.a (mon.0) 631 : audit [INF] from='client.? 192.168.123.107:0/3531606599' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1950711"}]: dispatch 2026-03-09T14:42:17.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:16 vm07 bash[56315]: audit 2026-03-09T14:42:15.850795+0000 mon.a (mon.0) 632 : audit [INF] from='client.? 192.168.123.107:0/3531606599' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1950711"}]': finished 2026-03-09T14:42:17.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:16 vm07 bash[56315]: audit 2026-03-09T14:42:15.850795+0000 mon.a (mon.0) 632 : audit [INF] from='client.? 192.168.123.107:0/3531606599' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1950711"}]': finished 2026-03-09T14:42:17.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:16 vm07 bash[56315]: cluster 2026-03-09T14:42:15.855381+0000 mon.a (mon.0) 633 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in 2026-03-09T14:42:17.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:16 vm07 bash[56315]: cluster 2026-03-09T14:42:15.855381+0000 mon.a (mon.0) 633 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in 2026-03-09T14:42:17.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:16 vm07 bash[56315]: audit 2026-03-09T14:42:16.011706+0000 mon.a (mon.0) 634 : audit [INF] from='client.? 192.168.123.107:0/1388062184' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/253675067"}]: dispatch 2026-03-09T14:42:17.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:16 vm07 bash[56315]: audit 2026-03-09T14:42:16.011706+0000 mon.a (mon.0) 634 : audit [INF] from='client.? 
192.168.123.107:0/1388062184' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/253675067"}]: dispatch 2026-03-09T14:42:17.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:16 vm07 bash[55244]: audit 2026-03-09T14:42:15.850795+0000 mon.a (mon.0) 632 : audit [INF] from='client.? 192.168.123.107:0/3531606599' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1950711"}]': finished 2026-03-09T14:42:17.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:16 vm07 bash[55244]: audit 2026-03-09T14:42:15.850795+0000 mon.a (mon.0) 632 : audit [INF] from='client.? 192.168.123.107:0/3531606599' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1950711"}]': finished 2026-03-09T14:42:17.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:16 vm07 bash[55244]: cluster 2026-03-09T14:42:15.855381+0000 mon.a (mon.0) 633 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in 2026-03-09T14:42:17.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:16 vm07 bash[55244]: cluster 2026-03-09T14:42:15.855381+0000 mon.a (mon.0) 633 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in 2026-03-09T14:42:17.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:16 vm07 bash[55244]: audit 2026-03-09T14:42:16.011706+0000 mon.a (mon.0) 634 : audit [INF] from='client.? 192.168.123.107:0/1388062184' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/253675067"}]: dispatch 2026-03-09T14:42:17.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:16 vm07 bash[55244]: audit 2026-03-09T14:42:16.011706+0000 mon.a (mon.0) 634 : audit [INF] from='client.? 
192.168.123.107:0/1388062184' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/253675067"}]: dispatch 2026-03-09T14:42:17.252 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:42:16 vm11 bash[41290]: ts=2026-03-09T14:42:16.949Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:42:17.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:16 vm11 bash[43577]: audit 2026-03-09T14:42:15.850795+0000 mon.a (mon.0) 632 : audit [INF] from='client.? 192.168.123.107:0/3531606599' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1950711"}]': finished 2026-03-09T14:42:17.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:16 vm11 bash[43577]: audit 2026-03-09T14:42:15.850795+0000 mon.a (mon.0) 632 : audit [INF] from='client.? 192.168.123.107:0/3531606599' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/1950711"}]': finished 2026-03-09T14:42:17.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:16 vm11 bash[43577]: cluster 2026-03-09T14:42:15.855381+0000 mon.a (mon.0) 633 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in 2026-03-09T14:42:17.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:16 vm11 bash[43577]: cluster 2026-03-09T14:42:15.855381+0000 mon.a (mon.0) 633 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in 2026-03-09T14:42:17.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:16 vm11 bash[43577]: audit 2026-03-09T14:42:16.011706+0000 mon.a (mon.0) 634 : audit [INF] from='client.? 192.168.123.107:0/1388062184' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/253675067"}]: dispatch 2026-03-09T14:42:17.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:16 vm11 bash[43577]: audit 2026-03-09T14:42:16.011706+0000 mon.a (mon.0) 634 : audit [INF] from='client.? 
192.168.123.107:0/1388062184' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/253675067"}]: dispatch 2026-03-09T14:42:18.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:17 vm07 bash[55244]: cluster 2026-03-09T14:42:16.566460+0000 mgr.y (mgr.44103) 253 : cluster [DBG] pgmap v150: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 1 op/s 2026-03-09T14:42:18.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:17 vm07 bash[55244]: cluster 2026-03-09T14:42:16.566460+0000 mgr.y (mgr.44103) 253 : cluster [DBG] pgmap v150: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 1 op/s 2026-03-09T14:42:18.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:17 vm07 bash[55244]: audit 2026-03-09T14:42:16.860115+0000 mon.a (mon.0) 635 : audit [INF] from='client.? 192.168.123.107:0/1388062184' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/253675067"}]': finished 2026-03-09T14:42:18.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:17 vm07 bash[55244]: audit 2026-03-09T14:42:16.860115+0000 mon.a (mon.0) 635 : audit [INF] from='client.? 192.168.123.107:0/1388062184' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/253675067"}]': finished 2026-03-09T14:42:18.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:17 vm07 bash[55244]: cluster 2026-03-09T14:42:16.871453+0000 mon.a (mon.0) 636 : cluster [DBG] osdmap e138: 8 total, 8 up, 8 in 2026-03-09T14:42:18.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:17 vm07 bash[55244]: cluster 2026-03-09T14:42:16.871453+0000 mon.a (mon.0) 636 : cluster [DBG] osdmap e138: 8 total, 8 up, 8 in 2026-03-09T14:42:18.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:17 vm07 bash[55244]: audit 2026-03-09T14:42:17.058602+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:18.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:17 vm07 bash[55244]: audit 2026-03-09T14:42:17.058602+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:18.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:17 vm07 bash[55244]: audit 2026-03-09T14:42:17.069040+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:18.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:17 vm07 bash[55244]: audit 2026-03-09T14:42:17.069040+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:18.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:17 vm07 bash[55244]: audit 2026-03-09T14:42:17.104099+0000 mon.a (mon.0) 639 : audit [INF] from='client.? 192.168.123.107:0/3045845839' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/3556397736"}]: dispatch 2026-03-09T14:42:18.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:17 vm07 bash[55244]: audit 2026-03-09T14:42:17.104099+0000 mon.a (mon.0) 639 : audit [INF] from='client.? 
192.168.123.107:0/3045845839' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/3556397736"}]: dispatch 2026-03-09T14:42:18.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:17 vm07 bash[55244]: audit 2026-03-09T14:42:17.182427+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:18.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:17 vm07 bash[55244]: audit 2026-03-09T14:42:17.182427+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:18.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:17 vm07 bash[55244]: audit 2026-03-09T14:42:17.187917+0000 mon.a (mon.0) 641 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:18.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:17 vm07 bash[55244]: audit 2026-03-09T14:42:17.187917+0000 mon.a (mon.0) 641 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:18.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:17 vm07 bash[55244]: audit 2026-03-09T14:42:17.770445+0000 mon.a (mon.0) 642 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:18.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:17 vm07 bash[55244]: audit 2026-03-09T14:42:17.770445+0000 mon.a (mon.0) 642 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:18.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:17 vm07 bash[55244]: audit 2026-03-09T14:42:17.777309+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:18.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:17 vm07 bash[55244]: audit 2026-03-09T14:42:17.777309+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:18.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:17 vm07 bash[56315]: cluster 2026-03-09T14:42:16.566460+0000 mgr.y (mgr.44103) 253 : cluster [DBG] pgmap v150: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 1 op/s 2026-03-09T14:42:18.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:17 vm07 bash[56315]: cluster 2026-03-09T14:42:16.566460+0000 mgr.y (mgr.44103) 253 : cluster [DBG] pgmap v150: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 1 op/s 2026-03-09T14:42:18.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:17 vm07 bash[56315]: audit 2026-03-09T14:42:16.860115+0000 mon.a (mon.0) 635 : audit [INF] from='client.? 192.168.123.107:0/1388062184' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/253675067"}]': finished 2026-03-09T14:42:18.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:17 vm07 bash[56315]: audit 2026-03-09T14:42:16.860115+0000 mon.a (mon.0) 635 : audit [INF] from='client.? 
192.168.123.107:0/1388062184' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/253675067"}]': finished 2026-03-09T14:42:18.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:17 vm07 bash[56315]: cluster 2026-03-09T14:42:16.871453+0000 mon.a (mon.0) 636 : cluster [DBG] osdmap e138: 8 total, 8 up, 8 in 2026-03-09T14:42:18.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:17 vm07 bash[56315]: cluster 2026-03-09T14:42:16.871453+0000 mon.a (mon.0) 636 : cluster [DBG] osdmap e138: 8 total, 8 up, 8 in 2026-03-09T14:42:18.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:17 vm07 bash[56315]: audit 2026-03-09T14:42:17.058602+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:18.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:17 vm07 bash[56315]: audit 2026-03-09T14:42:17.058602+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:18.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:17 vm07 bash[56315]: audit 2026-03-09T14:42:17.069040+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:18.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:17 vm07 bash[56315]: audit 2026-03-09T14:42:17.069040+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:18.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:17 vm07 bash[56315]: audit 2026-03-09T14:42:17.104099+0000 mon.a (mon.0) 639 : audit [INF] from='client.? 192.168.123.107:0/3045845839' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/3556397736"}]: dispatch 2026-03-09T14:42:18.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:17 vm07 bash[56315]: audit 2026-03-09T14:42:17.104099+0000 mon.a (mon.0) 639 : audit [INF] from='client.? 
192.168.123.107:0/3045845839' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/3556397736"}]: dispatch 2026-03-09T14:42:18.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:17 vm07 bash[56315]: audit 2026-03-09T14:42:17.182427+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:18.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:17 vm07 bash[56315]: audit 2026-03-09T14:42:17.182427+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:18.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:17 vm07 bash[56315]: audit 2026-03-09T14:42:17.187917+0000 mon.a (mon.0) 641 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:18.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:17 vm07 bash[56315]: audit 2026-03-09T14:42:17.187917+0000 mon.a (mon.0) 641 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:18.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:17 vm07 bash[56315]: audit 2026-03-09T14:42:17.770445+0000 mon.a (mon.0) 642 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:18.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:17 vm07 bash[56315]: audit 2026-03-09T14:42:17.770445+0000 mon.a (mon.0) 642 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:18.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:17 vm07 bash[56315]: audit 2026-03-09T14:42:17.777309+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:18.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:17 vm07 bash[56315]: audit 2026-03-09T14:42:17.777309+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:18.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:17 vm11 bash[43577]: cluster 2026-03-09T14:42:16.566460+0000 mgr.y (mgr.44103) 253 : cluster [DBG] pgmap v150: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 1 op/s 2026-03-09T14:42:18.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:17 vm11 bash[43577]: cluster 2026-03-09T14:42:16.566460+0000 mgr.y (mgr.44103) 253 : cluster [DBG] pgmap v150: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 1 op/s 2026-03-09T14:42:18.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:17 vm11 bash[43577]: audit 2026-03-09T14:42:16.860115+0000 mon.a (mon.0) 635 : audit [INF] from='client.? 192.168.123.107:0/1388062184' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/253675067"}]': finished 2026-03-09T14:42:18.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:17 vm11 bash[43577]: audit 2026-03-09T14:42:16.860115+0000 mon.a (mon.0) 635 : audit [INF] from='client.? 
192.168.123.107:0/1388062184' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:0/253675067"}]': finished 2026-03-09T14:42:18.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:17 vm11 bash[43577]: cluster 2026-03-09T14:42:16.871453+0000 mon.a (mon.0) 636 : cluster [DBG] osdmap e138: 8 total, 8 up, 8 in 2026-03-09T14:42:18.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:17 vm11 bash[43577]: cluster 2026-03-09T14:42:16.871453+0000 mon.a (mon.0) 636 : cluster [DBG] osdmap e138: 8 total, 8 up, 8 in 2026-03-09T14:42:18.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:17 vm11 bash[43577]: audit 2026-03-09T14:42:17.058602+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:18.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:17 vm11 bash[43577]: audit 2026-03-09T14:42:17.058602+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:18.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:17 vm11 bash[43577]: audit 2026-03-09T14:42:17.069040+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:18.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:17 vm11 bash[43577]: audit 2026-03-09T14:42:17.069040+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:18.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:17 vm11 bash[43577]: audit 2026-03-09T14:42:17.104099+0000 mon.a (mon.0) 639 : audit [INF] from='client.? 192.168.123.107:0/3045845839' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/3556397736"}]: dispatch 2026-03-09T14:42:18.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:17 vm11 bash[43577]: audit 2026-03-09T14:42:17.104099+0000 mon.a (mon.0) 639 : audit [INF] from='client.? 
192.168.123.107:0/3045845839' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/3556397736"}]: dispatch 2026-03-09T14:42:18.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:17 vm11 bash[43577]: audit 2026-03-09T14:42:17.182427+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:18.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:17 vm11 bash[43577]: audit 2026-03-09T14:42:17.182427+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:18.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:17 vm11 bash[43577]: audit 2026-03-09T14:42:17.187917+0000 mon.a (mon.0) 641 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:18.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:17 vm11 bash[43577]: audit 2026-03-09T14:42:17.187917+0000 mon.a (mon.0) 641 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:18.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:17 vm11 bash[43577]: audit 2026-03-09T14:42:17.770445+0000 mon.a (mon.0) 642 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:18.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:17 vm11 bash[43577]: audit 2026-03-09T14:42:17.770445+0000 mon.a (mon.0) 642 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:18.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:17 vm11 bash[43577]: audit 2026-03-09T14:42:17.777309+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:18.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:17 vm11 bash[43577]: audit 2026-03-09T14:42:17.777309+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:19.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:19 vm07 bash[55244]: audit 2026-03-09T14:42:18.069918+0000 mon.a (mon.0) 644 : audit [INF] from='client.? 192.168.123.107:0/3045845839' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/3556397736"}]': finished 2026-03-09T14:42:19.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:19 vm07 bash[55244]: audit 2026-03-09T14:42:18.069918+0000 mon.a (mon.0) 644 : audit [INF] from='client.? 192.168.123.107:0/3045845839' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/3556397736"}]': finished 2026-03-09T14:42:19.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:19 vm07 bash[55244]: cluster 2026-03-09T14:42:18.090405+0000 mon.a (mon.0) 645 : cluster [DBG] osdmap e139: 8 total, 8 up, 8 in 2026-03-09T14:42:19.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:19 vm07 bash[55244]: cluster 2026-03-09T14:42:18.090405+0000 mon.a (mon.0) 645 : cluster [DBG] osdmap e139: 8 total, 8 up, 8 in 2026-03-09T14:42:19.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:19 vm07 bash[56315]: audit 2026-03-09T14:42:18.069918+0000 mon.a (mon.0) 644 : audit [INF] from='client.? 
192.168.123.107:0/3045845839' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/3556397736"}]': finished 2026-03-09T14:42:19.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:19 vm07 bash[56315]: audit 2026-03-09T14:42:18.069918+0000 mon.a (mon.0) 644 : audit [INF] from='client.? 192.168.123.107:0/3045845839' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/3556397736"}]': finished 2026-03-09T14:42:19.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:19 vm07 bash[56315]: cluster 2026-03-09T14:42:18.090405+0000 mon.a (mon.0) 645 : cluster [DBG] osdmap e139: 8 total, 8 up, 8 in 2026-03-09T14:42:19.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:19 vm07 bash[56315]: cluster 2026-03-09T14:42:18.090405+0000 mon.a (mon.0) 645 : cluster [DBG] osdmap e139: 8 total, 8 up, 8 in 2026-03-09T14:42:19.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:19 vm11 bash[43577]: audit 2026-03-09T14:42:18.069918+0000 mon.a (mon.0) 644 : audit [INF] from='client.? 192.168.123.107:0/3045845839' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/3556397736"}]': finished 2026-03-09T14:42:19.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:19 vm11 bash[43577]: audit 2026-03-09T14:42:18.069918+0000 mon.a (mon.0) 644 : audit [INF] from='client.? 192.168.123.107:0/3045845839' entity='client.iscsi.foo.vm07.ohlmos' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.107:6800/3556397736"}]': finished 2026-03-09T14:42:19.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:19 vm11 bash[43577]: cluster 2026-03-09T14:42:18.090405+0000 mon.a (mon.0) 645 : cluster [DBG] osdmap e139: 8 total, 8 up, 8 in 2026-03-09T14:42:19.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:19 vm11 bash[43577]: cluster 2026-03-09T14:42:18.090405+0000 mon.a (mon.0) 645 : cluster [DBG] osdmap e139: 8 total, 8 up, 8 in 2026-03-09T14:42:20.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:20 vm07 bash[55244]: cluster 2026-03-09T14:42:18.566901+0000 mgr.y (mgr.44103) 254 : cluster [DBG] pgmap v153: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:42:20.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:20 vm07 bash[55244]: cluster 2026-03-09T14:42:18.566901+0000 mgr.y (mgr.44103) 254 : cluster [DBG] pgmap v153: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:42:20.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:20 vm07 bash[56315]: cluster 2026-03-09T14:42:18.566901+0000 mgr.y (mgr.44103) 254 : cluster [DBG] pgmap v153: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:42:20.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:20 vm07 bash[56315]: cluster 2026-03-09T14:42:18.566901+0000 mgr.y (mgr.44103) 254 : cluster [DBG] pgmap v153: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:42:20.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:20 vm11 bash[43577]: cluster 2026-03-09T14:42:18.566901+0000 mgr.y (mgr.44103) 254 : cluster [DBG] pgmap v153: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:42:20.502 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:20 vm11 bash[43577]: cluster 2026-03-09T14:42:18.566901+0000 mgr.y (mgr.44103) 254 : cluster [DBG] pgmap v153: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:42:22.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:22 vm07 bash[55244]: cluster 2026-03-09T14:42:20.567673+0000 mgr.y (mgr.44103) 255 : cluster [DBG] pgmap v154: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 894 B/s rd, 0 op/s 2026-03-09T14:42:22.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:22 vm07 bash[55244]: cluster 2026-03-09T14:42:20.567673+0000 mgr.y (mgr.44103) 255 : cluster [DBG] pgmap v154: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 894 B/s rd, 0 op/s 2026-03-09T14:42:22.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:22 vm07 bash[56315]: cluster 2026-03-09T14:42:20.567673+0000 mgr.y (mgr.44103) 255 : cluster [DBG] pgmap v154: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 894 B/s rd, 0 op/s 2026-03-09T14:42:22.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:22 vm07 bash[56315]: cluster 2026-03-09T14:42:20.567673+0000 mgr.y (mgr.44103) 255 : cluster [DBG] pgmap v154: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 894 B/s rd, 0 op/s 2026-03-09T14:42:22.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:22 vm11 bash[43577]: cluster 2026-03-09T14:42:20.567673+0000 mgr.y (mgr.44103) 255 : cluster [DBG] pgmap v154: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 894 B/s rd, 0 op/s 2026-03-09T14:42:22.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:22 vm11 bash[43577]: cluster 2026-03-09T14:42:20.567673+0000 mgr.y (mgr.44103) 255 : cluster [DBG] pgmap v154: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 894 B/s rd, 0 op/s 2026-03-09T14:42:23.348 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:23 vm07 bash[55244]: audit 2026-03-09T14:42:22.137950+0000 mgr.y (mgr.44103) 256 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:42:23.348 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:23 vm07 bash[55244]: audit 2026-03-09T14:42:22.137950+0000 mgr.y (mgr.44103) 256 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:42:23.348 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:23 vm07 bash[55244]: audit 2026-03-09T14:42:22.575876+0000 mon.a (mon.0) 646 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:42:23.348 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:23 vm07 bash[55244]: audit 2026-03-09T14:42:22.575876+0000 mon.a (mon.0) 646 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:42:23.348 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:23 vm07 bash[56315]: audit 2026-03-09T14:42:22.137950+0000 mgr.y (mgr.44103) 256 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:42:23.348 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:23 vm07 bash[56315]: audit 
2026-03-09T14:42:22.137950+0000 mgr.y (mgr.44103) 256 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:42:23.348 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:23 vm07 bash[56315]: audit 2026-03-09T14:42:22.575876+0000 mon.a (mon.0) 646 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:42:23.348 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:23 vm07 bash[56315]: audit 2026-03-09T14:42:22.575876+0000 mon.a (mon.0) 646 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:42:23.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:23 vm11 bash[43577]: audit 2026-03-09T14:42:22.137950+0000 mgr.y (mgr.44103) 256 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:42:23.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:23 vm11 bash[43577]: audit 2026-03-09T14:42:22.137950+0000 mgr.y (mgr.44103) 256 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:42:23.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:23 vm11 bash[43577]: audit 2026-03-09T14:42:22.575876+0000 mon.a (mon.0) 646 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:42:23.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:23 vm11 bash[43577]: audit 2026-03-09T14:42:22.575876+0000 mon.a (mon.0) 646 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:42:23.654 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:42:23 vm07 bash[52213]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:42:23] "GET /metrics HTTP/1.1" 200 38253 "" "Prometheus/2.51.0" 2026-03-09T14:42:24.407 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: cluster 2026-03-09T14:42:22.568041+0000 mgr.y (mgr.44103) 257 : cluster [DBG] pgmap v155: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 762 B/s rd, 0 op/s 2026-03-09T14:42:24.407 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: cluster 2026-03-09T14:42:22.568041+0000 mgr.y (mgr.44103) 257 : cluster [DBG] pgmap v155: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 762 B/s rd, 0 op/s 2026-03-09T14:42:24.407 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.412319+0000 mon.a (mon.0) 647 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.407 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.412319+0000 mon.a (mon.0) 647 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.418941+0000 mon.a (mon.0) 648 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.418941+0000 mon.a (mon.0) 
648 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.420710+0000 mon.a (mon.0) 649 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.420710+0000 mon.a (mon.0) 649 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.421158+0000 mon.a (mon.0) 650 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.421158+0000 mon.a (mon.0) 650 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: cephadm 2026-03-09T14:42:23.423970+0000 mgr.y (mgr.44103) 258 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: cephadm 2026-03-09T14:42:23.423970+0000 mgr.y (mgr.44103) 258 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.698782+0000 mon.a (mon.0) 651 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.698782+0000 mon.a (mon.0) 651 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.717376+0000 mon.a (mon.0) 652 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.717376+0000 mon.a (mon.0) 652 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.726642+0000 mon.a (mon.0) 653 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.726642+0000 mon.a (mon.0) 653 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.728773+0000 mon.a (mon.0) 654 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: 
audit 2026-03-09T14:42:23.728773+0000 mon.a (mon.0) 654 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.729966+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm07"}]: dispatch 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.729966+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm07"}]: dispatch 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.733794+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.733794+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.764318+0000 mon.a (mon.0) 657 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.764318+0000 mon.a (mon.0) 657 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.765696+0000 mon.a (mon.0) 658 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.765696+0000 mon.a (mon.0) 658 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.766729+0000 mon.a (mon.0) 659 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.766729+0000 mon.a (mon.0) 659 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.767398+0000 mon.a (mon.0) 660 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.767398+0000 mon.a (mon.0) 660 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 
2026-03-09T14:42:23.768748+0000 mon.a (mon.0) 661 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.768748+0000 mon.a (mon.0) 661 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.769950+0000 mon.a (mon.0) 662 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.769950+0000 mon.a (mon.0) 662 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.770926+0000 mon.a (mon.0) 663 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.770926+0000 mon.a (mon.0) 663 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.771612+0000 mon.a (mon.0) 664 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.771612+0000 mon.a (mon.0) 664 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.772278+0000 mon.a (mon.0) 665 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.772278+0000 mon.a (mon.0) 665 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.408 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.772933+0000 mon.a (mon.0) 666 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.408 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:42:24 vm11 bash[41290]: ts=2026-03-09T14:42:24.146Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. 
This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.3\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.3\", ceph_version=\"ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.3\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"192.168.123.111:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:42:24.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.772933+0000 mon.a (mon.0) 666 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.773744+0000 mon.a (mon.0) 667 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.773744+0000 mon.a (mon.0) 667 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.777572+0000 mon.a (mon.0) 668 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.777572+0000 mon.a (mon.0) 668 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.780862+0000 mon.a (mon.0) 669 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm07.ohlmos"}]: dispatch 2026-03-09T14:42:24.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.780862+0000 mon.a (mon.0) 669 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm07.ohlmos"}]: dispatch 2026-03-09T14:42:24.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.783377+0000 mon.a (mon.0) 670 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm07.ohlmos"}]': 
finished 2026-03-09T14:42:24.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.783377+0000 mon.a (mon.0) 670 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm07.ohlmos"}]': finished 2026-03-09T14:42:24.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.787656+0000 mon.a (mon.0) 671 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.787656+0000 mon.a (mon.0) 671 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.791002+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.791002+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.793341+0000 mon.a (mon.0) 673 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.793341+0000 mon.a (mon.0) 673 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.796565+0000 mon.a (mon.0) 674 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.796565+0000 mon.a (mon.0) 674 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.798266+0000 mon.a (mon.0) 675 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.798266+0000 mon.a (mon.0) 675 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.799256+0000 mon.a (mon.0) 676 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.799256+0000 mon.a (mon.0) 676 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.800161+0000 mon.a (mon.0) 677 : audit [DBG] 
from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.753 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:24 vm11 bash[43577]: audit 2026-03-09T14:42:23.800161+0000 mon.a (mon.0) 677 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: cluster 2026-03-09T14:42:22.568041+0000 mgr.y (mgr.44103) 257 : cluster [DBG] pgmap v155: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 762 B/s rd, 0 op/s 2026-03-09T14:42:24.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: cluster 2026-03-09T14:42:22.568041+0000 mgr.y (mgr.44103) 257 : cluster [DBG] pgmap v155: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 762 B/s rd, 0 op/s 2026-03-09T14:42:24.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.412319+0000 mon.a (mon.0) 647 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.412319+0000 mon.a (mon.0) 647 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.418941+0000 mon.a (mon.0) 648 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.418941+0000 mon.a (mon.0) 648 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.420710+0000 mon.a (mon.0) 649 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:42:24.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.420710+0000 mon.a (mon.0) 649 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:42:24.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.421158+0000 mon.a (mon.0) 650 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:42:24.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.421158+0000 mon.a (mon.0) 650 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:42:24.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: cephadm 2026-03-09T14:42:23.423970+0000 mgr.y (mgr.44103) 258 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T14:42:24.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: cephadm 2026-03-09T14:42:23.423970+0000 mgr.y (mgr.44103) 258 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T14:42:24.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 
2026-03-09T14:42:23.698782+0000 mon.a (mon.0) 651 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.698782+0000 mon.a (mon.0) 651 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.717376+0000 mon.a (mon.0) 652 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T14:42:24.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.717376+0000 mon.a (mon.0) 652 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T14:42:24.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.726642+0000 mon.a (mon.0) 653 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.726642+0000 mon.a (mon.0) 653 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.728773+0000 mon.a (mon.0) 654 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T14:42:24.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.728773+0000 mon.a (mon.0) 654 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T14:42:24.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.729966+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm07"}]: dispatch 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.729966+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm07"}]: dispatch 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.733794+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.733794+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.764318+0000 mon.a (mon.0) 657 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.764318+0000 mon.a (mon.0) 657 : audit [DBG] from='mgr.44103 
192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.765696+0000 mon.a (mon.0) 658 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.765696+0000 mon.a (mon.0) 658 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.766729+0000 mon.a (mon.0) 659 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.766729+0000 mon.a (mon.0) 659 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.767398+0000 mon.a (mon.0) 660 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.767398+0000 mon.a (mon.0) 660 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.768748+0000 mon.a (mon.0) 661 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.768748+0000 mon.a (mon.0) 661 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.769950+0000 mon.a (mon.0) 662 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.769950+0000 mon.a (mon.0) 662 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.770926+0000 mon.a (mon.0) 663 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.770926+0000 mon.a (mon.0) 663 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.771612+0000 mon.a (mon.0) 664 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: 
dispatch 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.771612+0000 mon.a (mon.0) 664 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.772278+0000 mon.a (mon.0) 665 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.772278+0000 mon.a (mon.0) 665 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.772933+0000 mon.a (mon.0) 666 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.772933+0000 mon.a (mon.0) 666 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.773744+0000 mon.a (mon.0) 667 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.773744+0000 mon.a (mon.0) 667 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.777572+0000 mon.a (mon.0) 668 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.777572+0000 mon.a (mon.0) 668 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.780862+0000 mon.a (mon.0) 669 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm07.ohlmos"}]: dispatch 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.780862+0000 mon.a (mon.0) 669 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm07.ohlmos"}]: dispatch 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.783377+0000 mon.a (mon.0) 670 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm07.ohlmos"}]': finished 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.783377+0000 mon.a (mon.0) 670 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 
cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm07.ohlmos"}]': finished 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.787656+0000 mon.a (mon.0) 671 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.787656+0000 mon.a (mon.0) 671 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.791002+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.791002+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.793341+0000 mon.a (mon.0) 673 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.793341+0000 mon.a (mon.0) 673 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.796565+0000 mon.a (mon.0) 674 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.796565+0000 mon.a (mon.0) 674 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.798266+0000 mon.a (mon.0) 675 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.798266+0000 mon.a (mon.0) 675 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.799256+0000 mon.a (mon.0) 676 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.799256+0000 mon.a (mon.0) 676 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.800161+0000 mon.a (mon.0) 677 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:24 vm07 bash[55244]: audit 2026-03-09T14:42:23.800161+0000 mon.a 
(mon.0) 677 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: cluster 2026-03-09T14:42:22.568041+0000 mgr.y (mgr.44103) 257 : cluster [DBG] pgmap v155: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 762 B/s rd, 0 op/s 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: cluster 2026-03-09T14:42:22.568041+0000 mgr.y (mgr.44103) 257 : cluster [DBG] pgmap v155: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 762 B/s rd, 0 op/s 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.412319+0000 mon.a (mon.0) 647 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.412319+0000 mon.a (mon.0) 647 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.418941+0000 mon.a (mon.0) 648 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.418941+0000 mon.a (mon.0) 648 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.420710+0000 mon.a (mon.0) 649 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:42:24.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.420710+0000 mon.a (mon.0) 649 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.421158+0000 mon.a (mon.0) 650 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.421158+0000 mon.a (mon.0) 650 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: cephadm 2026-03-09T14:42:23.423970+0000 mgr.y (mgr.44103) 258 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: cephadm 2026-03-09T14:42:23.423970+0000 mgr.y (mgr.44103) 258 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.698782+0000 mon.a (mon.0) 651 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.698782+0000 
mon.a (mon.0) 651 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.717376+0000 mon.a (mon.0) 652 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.717376+0000 mon.a (mon.0) 652 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.726642+0000 mon.a (mon.0) 653 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.726642+0000 mon.a (mon.0) 653 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.728773+0000 mon.a (mon.0) 654 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.728773+0000 mon.a (mon.0) 654 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.729966+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm07"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.729966+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm07"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.733794+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.733794+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.764318+0000 mon.a (mon.0) 657 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.764318+0000 mon.a (mon.0) 657 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.765696+0000 mon.a (mon.0) 658 : audit [DBG] 
from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.765696+0000 mon.a (mon.0) 658 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.766729+0000 mon.a (mon.0) 659 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.766729+0000 mon.a (mon.0) 659 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.767398+0000 mon.a (mon.0) 660 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.767398+0000 mon.a (mon.0) 660 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.768748+0000 mon.a (mon.0) 661 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.768748+0000 mon.a (mon.0) 661 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.769950+0000 mon.a (mon.0) 662 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.769950+0000 mon.a (mon.0) 662 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.770926+0000 mon.a (mon.0) 663 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.770926+0000 mon.a (mon.0) 663 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.771612+0000 mon.a (mon.0) 664 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.771612+0000 mon.a (mon.0) 664 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: 
dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.772278+0000 mon.a (mon.0) 665 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.772278+0000 mon.a (mon.0) 665 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.772933+0000 mon.a (mon.0) 666 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.772933+0000 mon.a (mon.0) 666 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.773744+0000 mon.a (mon.0) 667 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.773744+0000 mon.a (mon.0) 667 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.777572+0000 mon.a (mon.0) 668 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.777572+0000 mon.a (mon.0) 668 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.780862+0000 mon.a (mon.0) 669 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm07.ohlmos"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.780862+0000 mon.a (mon.0) 669 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm07.ohlmos"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.783377+0000 mon.a (mon.0) 670 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm07.ohlmos"}]': finished 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.783377+0000 mon.a (mon.0) 670 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm07.ohlmos"}]': finished 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.787656+0000 mon.a (mon.0) 671 : 
audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.787656+0000 mon.a (mon.0) 671 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.791002+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.791002+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.793341+0000 mon.a (mon.0) 673 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.793341+0000 mon.a (mon.0) 673 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.796565+0000 mon.a (mon.0) 674 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.796565+0000 mon.a (mon.0) 674 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.798266+0000 mon.a (mon.0) 675 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.798266+0000 mon.a (mon.0) 675 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.799256+0000 mon.a (mon.0) 676 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.799256+0000 mon.a (mon.0) 676 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.800161+0000 mon.a (mon.0) 677 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:24.906 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:24 vm07 bash[56315]: audit 2026-03-09T14:42:23.800161+0000 mon.a (mon.0) 677 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:25.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:25 vm11 bash[43577]: audit 2026-03-09T14:42:23.717950+0000 mgr.y 
(mgr.44103) 259 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T14:42:25.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:25 vm11 bash[43577]: audit 2026-03-09T14:42:23.717950+0000 mgr.y (mgr.44103) 259 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T14:42:25.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:25 vm11 bash[43577]: cephadm 2026-03-09T14:42:23.728520+0000 mgr.y (mgr.44103) 260 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.107:5000 to Dashboard 2026-03-09T14:42:25.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:25 vm11 bash[43577]: cephadm 2026-03-09T14:42:23.728520+0000 mgr.y (mgr.44103) 260 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.107:5000 to Dashboard 2026-03-09T14:42:25.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:25 vm11 bash[43577]: audit 2026-03-09T14:42:23.729011+0000 mgr.y (mgr.44103) 261 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T14:42:25.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:25 vm11 bash[43577]: audit 2026-03-09T14:42:23.729011+0000 mgr.y (mgr.44103) 261 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T14:42:25.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:25 vm11 bash[43577]: audit 2026-03-09T14:42:23.730204+0000 mgr.y (mgr.44103) 262 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm07"}]: dispatch 2026-03-09T14:42:25.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:25 vm11 bash[43577]: audit 2026-03-09T14:42:23.730204+0000 mgr.y (mgr.44103) 262 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm07"}]: dispatch 2026-03-09T14:42:25.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:25 vm11 bash[43577]: cephadm 2026-03-09T14:42:23.774132+0000 mgr.y (mgr.44103) 263 : cephadm [INF] Upgrade: Setting container_image for all iscsi 2026-03-09T14:42:25.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:25 vm11 bash[43577]: cephadm 2026-03-09T14:42:23.774132+0000 mgr.y (mgr.44103) 263 : cephadm [INF] Upgrade: Setting container_image for all iscsi 2026-03-09T14:42:25.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:25 vm11 bash[43577]: cephadm 2026-03-09T14:42:23.788298+0000 mgr.y (mgr.44103) 264 : cephadm [INF] Upgrade: Setting container_image for all nfs 2026-03-09T14:42:25.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:25 vm11 bash[43577]: cephadm 2026-03-09T14:42:23.788298+0000 mgr.y (mgr.44103) 264 : cephadm [INF] Upgrade: Setting container_image for all nfs 2026-03-09T14:42:25.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:25 vm11 bash[43577]: cephadm 2026-03-09T14:42:23.793793+0000 mgr.y (mgr.44103) 265 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-09T14:42:25.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:25 vm11 bash[43577]: cephadm 2026-03-09T14:42:23.793793+0000 mgr.y (mgr.44103) 265 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-09T14:42:25.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:25 vm11 bash[43577]: cephadm 2026-03-09T14:42:24.247147+0000 mgr.y (mgr.44103) 266 : cephadm [INF] Upgrade: Updating grafana.a 2026-03-09T14:42:25.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:25 vm11 bash[43577]: cephadm 2026-03-09T14:42:24.247147+0000 mgr.y (mgr.44103) 266 : cephadm [INF] Upgrade: Updating grafana.a 2026-03-09T14:42:25.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:25 vm11 bash[43577]: cephadm 2026-03-09T14:42:24.281519+0000 mgr.y (mgr.44103) 267 : cephadm [INF] Deploying daemon grafana.a on vm11 2026-03-09T14:42:25.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:25 vm11 bash[43577]: cephadm 2026-03-09T14:42:24.281519+0000 mgr.y (mgr.44103) 267 : cephadm [INF] Deploying daemon grafana.a on vm11 2026-03-09T14:42:25.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:25 vm07 bash[55244]: audit 2026-03-09T14:42:23.717950+0000 mgr.y (mgr.44103) 259 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T14:42:25.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:25 vm07 bash[55244]: audit 2026-03-09T14:42:23.717950+0000 mgr.y (mgr.44103) 259 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T14:42:25.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:25 vm07 bash[55244]: cephadm 2026-03-09T14:42:23.728520+0000 mgr.y (mgr.44103) 260 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.107:5000 to Dashboard 2026-03-09T14:42:25.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:25 vm07 bash[55244]: cephadm 2026-03-09T14:42:23.728520+0000 mgr.y (mgr.44103) 260 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.107:5000 to Dashboard 2026-03-09T14:42:25.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:25 vm07 bash[55244]: audit 2026-03-09T14:42:23.729011+0000 mgr.y (mgr.44103) 261 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T14:42:25.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:25 vm07 bash[55244]: audit 2026-03-09T14:42:23.729011+0000 mgr.y (mgr.44103) 261 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T14:42:25.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:25 vm07 bash[55244]: audit 2026-03-09T14:42:23.730204+0000 mgr.y (mgr.44103) 262 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm07"}]: dispatch 2026-03-09T14:42:25.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:25 vm07 bash[55244]: audit 2026-03-09T14:42:23.730204+0000 mgr.y (mgr.44103) 262 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm07"}]: dispatch 2026-03-09T14:42:25.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:25 vm07 bash[55244]: cephadm 2026-03-09T14:42:23.774132+0000 mgr.y (mgr.44103) 263 : cephadm [INF] Upgrade: Setting container_image for all iscsi 2026-03-09T14:42:25.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:25 vm07 bash[55244]: cephadm 2026-03-09T14:42:23.774132+0000 mgr.y (mgr.44103) 263 : cephadm [INF] Upgrade: Setting container_image for all iscsi 2026-03-09T14:42:25.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:25 vm07 bash[55244]: cephadm 2026-03-09T14:42:23.788298+0000 mgr.y (mgr.44103) 264 : cephadm [INF] Upgrade: Setting container_image for all nfs 2026-03-09T14:42:25.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:25 vm07 bash[55244]: cephadm 2026-03-09T14:42:23.788298+0000 mgr.y (mgr.44103) 264 : cephadm [INF] Upgrade: Setting container_image for all nfs 2026-03-09T14:42:25.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:25 vm07 bash[55244]: cephadm 2026-03-09T14:42:23.793793+0000 mgr.y (mgr.44103) 265 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-09T14:42:25.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:25 vm07 bash[55244]: cephadm 2026-03-09T14:42:23.793793+0000 mgr.y (mgr.44103) 265 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-09T14:42:25.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:25 vm07 bash[55244]: cephadm 2026-03-09T14:42:24.247147+0000 mgr.y (mgr.44103) 266 : cephadm [INF] Upgrade: Updating grafana.a 2026-03-09T14:42:25.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:25 vm07 bash[55244]: cephadm 2026-03-09T14:42:24.247147+0000 mgr.y (mgr.44103) 266 : cephadm [INF] Upgrade: Updating grafana.a 2026-03-09T14:42:25.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:25 vm07 bash[55244]: cephadm 2026-03-09T14:42:24.281519+0000 mgr.y (mgr.44103) 267 : cephadm [INF] Deploying daemon grafana.a on vm11 2026-03-09T14:42:25.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:25 vm07 bash[55244]: cephadm 2026-03-09T14:42:24.281519+0000 mgr.y (mgr.44103) 267 : cephadm [INF] Deploying daemon grafana.a on vm11 2026-03-09T14:42:25.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:25 vm07 bash[56315]: audit 2026-03-09T14:42:23.717950+0000 mgr.y (mgr.44103) 259 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T14:42:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:25 vm07 bash[56315]: audit 2026-03-09T14:42:23.717950+0000 mgr.y (mgr.44103) 259 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-09T14:42:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:25 vm07 bash[56315]: cephadm 2026-03-09T14:42:23.728520+0000 mgr.y (mgr.44103) 260 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.107:5000 to Dashboard 2026-03-09T14:42:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:25 vm07 bash[56315]: cephadm 2026-03-09T14:42:23.728520+0000 mgr.y (mgr.44103) 260 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.107:5000 to Dashboard 2026-03-09T14:42:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:25 vm07 bash[56315]: audit 2026-03-09T14:42:23.729011+0000 mgr.y (mgr.44103) 261 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T14:42:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:25 vm07 bash[56315]: audit 2026-03-09T14:42:23.729011+0000 mgr.y (mgr.44103) 261 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-09T14:42:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:25 vm07 bash[56315]: audit 2026-03-09T14:42:23.730204+0000 mgr.y (mgr.44103) 262 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm07"}]: dispatch 2026-03-09T14:42:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:25 vm07 bash[56315]: audit 2026-03-09T14:42:23.730204+0000 mgr.y (mgr.44103) 262 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm07"}]: dispatch 2026-03-09T14:42:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:25 vm07 bash[56315]: cephadm 2026-03-09T14:42:23.774132+0000 mgr.y (mgr.44103) 263 : cephadm [INF] Upgrade: Setting container_image for all iscsi 2026-03-09T14:42:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:25 vm07 bash[56315]: cephadm 2026-03-09T14:42:23.774132+0000 mgr.y (mgr.44103) 263 : cephadm [INF] Upgrade: Setting container_image for all iscsi 2026-03-09T14:42:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:25 vm07 bash[56315]: cephadm 2026-03-09T14:42:23.788298+0000 mgr.y (mgr.44103) 264 : cephadm [INF] Upgrade: Setting container_image for all nfs 2026-03-09T14:42:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:25 vm07 bash[56315]: cephadm 2026-03-09T14:42:23.788298+0000 mgr.y (mgr.44103) 264 : cephadm [INF] Upgrade: Setting container_image for all nfs 2026-03-09T14:42:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:25 vm07 bash[56315]: cephadm 2026-03-09T14:42:23.793793+0000 mgr.y (mgr.44103) 265 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-09T14:42:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:25 vm07 bash[56315]: cephadm 2026-03-09T14:42:23.793793+0000 mgr.y (mgr.44103) 265 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-09T14:42:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:25 vm07 bash[56315]: cephadm 2026-03-09T14:42:24.247147+0000 mgr.y (mgr.44103) 266 : cephadm [INF] Upgrade: Updating grafana.a 2026-03-09T14:42:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:25 vm07 bash[56315]: cephadm 2026-03-09T14:42:24.247147+0000 mgr.y (mgr.44103) 266 : cephadm [INF] Upgrade: Updating grafana.a 2026-03-09T14:42:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:25 vm07 bash[56315]: cephadm 2026-03-09T14:42:24.281519+0000 mgr.y (mgr.44103) 
267 : cephadm [INF] Deploying daemon grafana.a on vm11 2026-03-09T14:42:25.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:25 vm07 bash[56315]: cephadm 2026-03-09T14:42:24.281519+0000 mgr.y (mgr.44103) 267 : cephadm [INF] Deploying daemon grafana.a on vm11 2026-03-09T14:42:26.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:26 vm11 bash[43577]: cluster 2026-03-09T14:42:24.568516+0000 mgr.y (mgr.44103) 268 : cluster [DBG] pgmap v156: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 3.0 KiB/s rd, 3 op/s 2026-03-09T14:42:26.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:26 vm11 bash[43577]: cluster 2026-03-09T14:42:24.568516+0000 mgr.y (mgr.44103) 268 : cluster [DBG] pgmap v156: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 3.0 KiB/s rd, 3 op/s 2026-03-09T14:42:26.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:26 vm07 bash[55244]: cluster 2026-03-09T14:42:24.568516+0000 mgr.y (mgr.44103) 268 : cluster [DBG] pgmap v156: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 3.0 KiB/s rd, 3 op/s 2026-03-09T14:42:26.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:26 vm07 bash[55244]: cluster 2026-03-09T14:42:24.568516+0000 mgr.y (mgr.44103) 268 : cluster [DBG] pgmap v156: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 3.0 KiB/s rd, 3 op/s 2026-03-09T14:42:26.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:26 vm07 bash[56315]: cluster 2026-03-09T14:42:24.568516+0000 mgr.y (mgr.44103) 268 : cluster [DBG] pgmap v156: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 3.0 KiB/s rd, 3 op/s 2026-03-09T14:42:26.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:26 vm07 bash[56315]: cluster 2026-03-09T14:42:24.568516+0000 mgr.y (mgr.44103) 268 : cluster [DBG] pgmap v156: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 3.0 KiB/s rd, 3 op/s 2026-03-09T14:42:27.252 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:42:26 vm11 bash[41290]: ts=2026-03-09T14:42:26.947Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:42:28.752 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:28 vm11 bash[43577]: cluster 2026-03-09T14:42:26.568985+0000 mgr.y (mgr.44103) 269 : cluster [DBG] pgmap v157: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 3.0 KiB/s rd, 3 op/s 2026-03-09T14:42:28.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:28 vm11 bash[43577]: cluster 2026-03-09T14:42:26.568985+0000 mgr.y (mgr.44103) 269 : cluster [DBG] pgmap v157: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 3.0 KiB/s rd, 3 op/s 2026-03-09T14:42:28.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:28 vm07 bash[55244]: cluster 2026-03-09T14:42:26.568985+0000 mgr.y (mgr.44103) 269 : cluster [DBG] pgmap v157: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 3.0 KiB/s rd, 3 op/s 2026-03-09T14:42:28.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:28 vm07 bash[55244]: cluster 2026-03-09T14:42:26.568985+0000 mgr.y (mgr.44103) 269 : cluster [DBG] pgmap v157: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 3.0 KiB/s rd, 3 op/s 2026-03-09T14:42:28.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:28 vm07 bash[56315]: cluster 2026-03-09T14:42:26.568985+0000 mgr.y (mgr.44103) 269 : cluster [DBG] pgmap v157: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 3.0 KiB/s rd, 3 op/s 2026-03-09T14:42:28.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:28 vm07 bash[56315]: cluster 2026-03-09T14:42:26.568985+0000 mgr.y (mgr.44103) 269 : cluster [DBG] pgmap v157: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 3.0 KiB/s rd, 3 op/s 2026-03-09T14:42:30.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:30 vm11 bash[43577]: cluster 2026-03-09T14:42:28.569406+0000 mgr.y (mgr.44103) 270 : cluster [DBG] pgmap v158: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 3.2 KiB/s rd, 3 op/s 2026-03-09T14:42:30.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:30 vm11 bash[43577]: cluster 2026-03-09T14:42:28.569406+0000 mgr.y (mgr.44103) 270 : cluster [DBG] pgmap v158: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 3.2 KiB/s rd, 3 op/s 2026-03-09T14:42:30.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:30 vm07 bash[55244]: cluster 2026-03-09T14:42:28.569406+0000 mgr.y (mgr.44103) 270 : cluster [DBG] pgmap v158: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 3.2 KiB/s rd, 3 op/s 2026-03-09T14:42:30.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:30 vm07 bash[55244]: cluster 2026-03-09T14:42:28.569406+0000 mgr.y (mgr.44103) 270 : cluster [DBG] pgmap v158: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 3.2 KiB/s rd, 3 op/s 2026-03-09T14:42:30.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:30 vm07 bash[56315]: cluster 2026-03-09T14:42:28.569406+0000 mgr.y (mgr.44103) 270 : cluster [DBG] pgmap v158: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 3.2 KiB/s rd, 3 op/s 2026-03-09T14:42:30.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:30 vm07 bash[56315]: cluster 2026-03-09T14:42:28.569406+0000 mgr.y (mgr.44103) 270 : cluster [DBG] pgmap v158: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 3.2 KiB/s rd, 3 op/s 2026-03-09T14:42:31.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:31 vm07 bash[55244]: cluster 
2026-03-09T14:42:30.569820+0000 mgr.y (mgr.44103) 271 : cluster [DBG] pgmap v159: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 2 op/s 2026-03-09T14:42:31.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:31 vm07 bash[55244]: cluster 2026-03-09T14:42:30.569820+0000 mgr.y (mgr.44103) 271 : cluster [DBG] pgmap v159: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 2 op/s 2026-03-09T14:42:31.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:31 vm07 bash[56315]: cluster 2026-03-09T14:42:30.569820+0000 mgr.y (mgr.44103) 271 : cluster [DBG] pgmap v159: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 2 op/s 2026-03-09T14:42:31.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:31 vm07 bash[56315]: cluster 2026-03-09T14:42:30.569820+0000 mgr.y (mgr.44103) 271 : cluster [DBG] pgmap v159: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 2 op/s 2026-03-09T14:42:32.002 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:31 vm11 bash[43577]: cluster 2026-03-09T14:42:30.569820+0000 mgr.y (mgr.44103) 271 : cluster [DBG] pgmap v159: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 2 op/s 2026-03-09T14:42:32.002 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:31 vm11 bash[43577]: cluster 2026-03-09T14:42:30.569820+0000 mgr.y (mgr.44103) 271 : cluster [DBG] pgmap v159: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 2 op/s 2026-03-09T14:42:32.927 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:32 vm11 bash[43577]: audit 2026-03-09T14:42:32.147773+0000 mgr.y (mgr.44103) 272 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:42:32.927 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:32 vm11 bash[43577]: audit 2026-03-09T14:42:32.147773+0000 mgr.y (mgr.44103) 272 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:42:33.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:32 vm07 bash[55244]: audit 2026-03-09T14:42:32.147773+0000 mgr.y (mgr.44103) 272 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:42:33.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:32 vm07 bash[55244]: audit 2026-03-09T14:42:32.147773+0000 mgr.y (mgr.44103) 272 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:42:33.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:32 vm07 bash[56315]: audit 2026-03-09T14:42:32.147773+0000 mgr.y (mgr.44103) 272 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:42:33.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:32 vm07 bash[56315]: audit 2026-03-09T14:42:32.147773+0000 mgr.y (mgr.44103) 272 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:42:33.236 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:42:32 vm11 systemd[1]: 
/etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:42:33.236 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:42:32 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:42:33.236 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:42:32 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:42:33.236 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:42:32 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:42:33.236 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:42:32 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:42:33.236 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:42:32 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:42:33.236 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:32 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:42:33.236 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 systemd[1]: Stopping Ceph grafana.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040... 
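The CephNodeDiskspaceWarning evaluation failure logged at 14:42:26 above happens because Prometheus holds two node_uname_info series for instance="vm07" (one carrying a cluster label, one without), so the rule's "on (instance) group_left (nodename)" join becomes many-to-many. A minimal sketch for inspecting the colliding series directly; the endpoint vm11:9095 (cephadm's usual Prometheus port on the host running prometheus.a) and the presence of jq are assumptions, not values taken from this log:

    # Sketch: list every node_uname_info series for the instance named in the failed match group.
    # The duplicate pair should differ only by the presence of the 'cluster' label.
    curl -sG 'http://vm11:9095/api/v1/series' \
      --data-urlencode 'match[]=node_uname_info{instance="vm07"}' | jq '.data'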
2026-03-09T14:42:33.237 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[39430]: t=2026-03-09T14:42:33+0000 lvl=info msg="Shutdown started" logger=server reason="System signal: terminated" 2026-03-09T14:42:33.237 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59122]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-grafana-a 2026-03-09T14:42:33.237 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 systemd[1]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@grafana.a.service: Deactivated successfully. 2026-03-09T14:42:33.237 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 systemd[1]: Stopped Ceph grafana.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:42:33.237 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:42:32 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:42:33.237 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:32 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:42:33.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:33 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:42:33.502 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 09 14:42:33 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:42:33.503 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:42:33 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:42:33.503 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:42:33 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
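The repeated systemd warnings above all point at line 23 of the cephadm-generated unit template ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service, which sets KillMode=none on this release; systemd re-emits the warning for every unit instance (each OSD, mon, mgr and monitoring daemon on vm11) whenever the unit set is reloaded during the redeploy. A hedged sketch of how the setting could be located, and how a drop-in override would look in general; cephadm owns and regenerates these unit files, so the override part is illustrative of the systemd mechanism only, not a change this test performs:

    # Confirm which line of the generated template carries the deprecated setting:
    grep -n 'KillMode' /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service
    # Illustration only: a drop-in is the standard way to override a unit setting,
    # but cephadm regenerates this template, so treat this as a sketch rather than a fix.
    sudo mkdir -p /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service.d
    printf '[Service]\nKillMode=mixed\n' | \
      sudo tee /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service.d/override.conf
    sudo systemctl daemon-reload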
2026-03-09T14:42:33.503 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:42:33 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:42:33.503 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:42:33 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:42:33.503 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:42:33 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:42:33.503 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:42:33 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:42:33.503 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:42:33.503 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 systemd[1]: Started Ceph grafana.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 
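The stop/start pair above is cephadm redeploying grafana.a on vm11 as part of the upgrade (matching the "Upgrade: Updating grafana.a" and "Deploying daemon grafana.a on vm11" messages at 14:42:24). A minimal sketch of how the redeploy could be checked from any mon host; these are standard ceph orch commands, not ones issued by the job at this point in the run:

    # Sketch: confirm grafana.a came back and the upgrade is still progressing.
    ceph orch ps | grep '^grafana.a'     # expect state 'running' with the refreshed image
    ceph orch upgrade status             # in_progress should still be true at this stage
    ceph health detail                   # no CEPHADM_FAILED_DAEMON expected for grafana.a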
2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=settings t=2026-03-09T14:42:33.705002542Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2026-03-09T14:42:33Z 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=settings t=2026-03-09T14:42:33.705321812Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=settings t=2026-03-09T14:42:33.705328735Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=settings t=2026-03-09T14:42:33.705330828Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=settings t=2026-03-09T14:42:33.705332472Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=settings t=2026-03-09T14:42:33.705334105Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=settings t=2026-03-09T14:42:33.70533685Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=settings t=2026-03-09T14:42:33.705338493Z level=info msg="Config overridden from command line" arg="default.log.mode=console" 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=settings t=2026-03-09T14:42:33.705340377Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=settings t=2026-03-09T14:42:33.70534223Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=settings t=2026-03-09T14:42:33.705343753Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=settings t=2026-03-09T14:42:33.705345235Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=settings t=2026-03-09T14:42:33.705346798Z level=info msg=Target target=[all] 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=settings t=2026-03-09T14:42:33.705355525Z level=info msg="Path Home" path=/usr/share/grafana 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=settings t=2026-03-09T14:42:33.705357329Z level=info msg="Path 
Data" path=/var/lib/grafana 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=settings t=2026-03-09T14:42:33.705358841Z level=info msg="Path Logs" path=/var/log/grafana 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=settings t=2026-03-09T14:42:33.705360364Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=settings t=2026-03-09T14:42:33.705361867Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=settings t=2026-03-09T14:42:33.705364773Z level=info msg="App mode production" 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=sqlstore t=2026-03-09T14:42:33.705515807Z level=info msg="Connecting to DB" dbtype=sqlite3 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=sqlstore t=2026-03-09T14:42:33.705523651Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r----- 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.706069698Z level=info msg="Starting DB migrations" 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.712749566Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.734068815Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=21.311634ms 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.735872275Z level=info msg="Executing migration" id="Add uid column to user" 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.738233314Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=2.360478ms 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.739861404Z level=info msg="Executing migration" id="Update uid column values for users" 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.740138285Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=276.911µs 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.741389347Z level=info msg="Executing migration" id="Add unique index user_uid" 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.742060437Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=671.182µs 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: 
logger=migrator t=2026-03-09T14:42:33.743693379Z level=info msg="Executing migration" id="Add isPublic for dashboard" 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.745919012Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.224822ms 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.749874628Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.75130645Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=1.429518ms 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.752990737Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.755724546Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.733208ms 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.758615971Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.760926115Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.309722ms 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.762431815Z level=info msg="Executing migration" id="Add playlist column created_at" 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.765375179Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=2.94173ms 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.766604118Z level=info msg="Executing migration" id="Add playlist column updated_at" 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.768803894Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.198855ms 2026-03-09T14:42:33.784 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.769882192Z level=info msg="Executing migration" id="Add column preferences.json_data" 2026-03-09T14:42:33.785 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.772020781Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=2.137368ms 2026-03-09T14:42:33.785 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.773276322Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 2026-03-09T14:42:33.785 
INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.77338692Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=111.439µs 2026-03-09T14:42:33.785 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.774227521Z level=info msg="Executing migration" id="Add preferences index org_id" 2026-03-09T14:42:33.785 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.774816898Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=589.138µs 2026-03-09T14:42:33.785 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.77590835Z level=info msg="Executing migration" id="Add preferences index user_id" 2026-03-09T14:42:33.785 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.776476599Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=568.449µs 2026-03-09T14:42:33.785 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.77765321Z level=info msg="Executing migration" id="Increase tags column to length 4096" 2026-03-09T14:42:33.785 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.777748399Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=94.077µs 2026-03-09T14:42:33.785 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.778560567Z level=info msg="Executing migration" id="Add column uid in team" 2026-03-09T14:42:33.785 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.780657829Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=2.095549ms 2026-03-09T14:42:33.785 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.78166943Z level=info msg="Executing migration" id="Update uid column values in team" 2026-03-09T14:42:33.785 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.781890225Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=219.352µs 2026-03-09T14:42:33.785 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.782688296Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 2026-03-09T14:42:33.785 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.783900525Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.210325ms 2026-03-09T14:42:33.785 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.785228591Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 2026-03-09T14:42:33.785 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.787319603Z level=info msg="Migration successfully executed" id="Add OAuth ID token to 
user_auth" duration=2.090792ms 2026-03-09T14:42:33.785 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.788146567Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 2026-03-09T14:42:33.785 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.788719473Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=572.315µs 2026-03-09T14:42:33.813 INFO:teuthology.orchestra.run.vm07.stdout:true 2026-03-09T14:42:33.904 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:42:33 vm07 bash[52213]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:42:33] "GET /metrics HTTP/1.1" 200 38253 "" "Prometheus/2.51.0" 2026-03-09T14:42:34.043 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.7904617Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 2026-03-09T14:42:34.043 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.790561648Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=99.478µs 2026-03-09T14:42:34.043 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.791398981Z level=info msg="Executing migration" id="add current_reason column related to current_state" 2026-03-09T14:42:34.043 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.793609728Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=2.209644ms 2026-03-09T14:42:34.043 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.79455818Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" 2026-03-09T14:42:34.043 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.796651657Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=2.093235ms 2026-03-09T14:42:34.043 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.797574391Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 2026-03-09T14:42:34.043 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.799634564Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=2.059462ms 2026-03-09T14:42:34.043 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.800576976Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 2026-03-09T14:42:34.043 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.802686382Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=2.108995ms 2026-03-09T14:42:34.043 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.803455788Z level=info msg="Executing migration" id="fix 
is_paused column for alert_rule table" 2026-03-09T14:42:34.043 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.803549252Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=93.414µs 2026-03-09T14:42:34.043 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.804314592Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 2026-03-09T14:42:34.043 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.80639887Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=2.081954ms 2026-03-09T14:42:34.043 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.807328829Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 2026-03-09T14:42:34.043 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.809427314Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=2.098074ms 2026-03-09T14:42:34.043 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.810425119Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 2026-03-09T14:42:34.043 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.810531699Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=106.83µs 2026-03-09T14:42:34.043 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.81147886Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 2026-03-09T14:42:34.043 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.813698112Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=2.218249ms 2026-03-09T14:42:34.043 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.814690979Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 2026-03-09T14:42:34.043 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.816880615Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=2.189045ms 2026-03-09T14:42:34.043 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.817943734Z level=info msg="Executing migration" id="create provenance_type table" 2026-03-09T14:42:34.043 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.81837273Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=428.835µs 2026-03-09T14:42:34.043 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.819539173Z level=info msg="Executing migration" id="add 
index to uniquify (record_key, record_type, org_id) columns" 2026-03-09T14:42:34.043 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.820248426Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=705.597µs 2026-03-09T14:42:34.043 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.821458752Z level=info msg="Executing migration" id="create alert_image table" 2026-03-09T14:42:34.043 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.821871669Z level=info msg="Migration successfully executed" id="create alert_image table" duration=412.837µs 2026-03-09T14:42:34.043 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.822997334Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 2026-03-09T14:42:34.043 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.823505089Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=509.628µs 2026-03-09T14:42:34.043 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.824639632Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 2026-03-09T14:42:34.043 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.824738186Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=98.635µs 2026-03-09T14:42:34.043 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.825730764Z level=info msg="Executing migration" id=create_alert_configuration_history_table 2026-03-09T14:42:34.043 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.826185799Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=454.936µs 2026-03-09T14:42:34.043 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.827256041Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 2026-03-09T14:42:34.043 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.827802959Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=546.088µs 2026-03-09T14:42:34.043 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.828738367Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 2026-03-09T14:42:34.043 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.828972687Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.831206226Z level=info msg="Executing 
migration" id="extract alertmanager configuration history to separate table" 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.831814Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=607.683µs 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.83275107Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.833272862Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=521.911µs 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.834096059Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.836247564Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=2.151165ms 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.837172533Z level=info msg="Executing migration" id="increase max description length to 2048" 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.837188282Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=15.478µs 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.837986834Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.838081492Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=94.197µs 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.840720052Z level=info msg="Executing migration" id="create secrets table" 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.841122008Z level=info msg="Migration successfully executed" id="create secrets table" duration=401.905µs 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.842212698Z level=info msg="Executing migration" id="rename data_keys name column to id" 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.853134401Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=10.919799ms 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.854212938Z level=info msg="Executing migration" id="add name column into data_keys" 
2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.856468849Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=2.25557ms 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.857429925Z level=info msg="Executing migration" id="copy data_keys id column values into name" 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.857599314Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=170.752µs 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.858354183Z level=info msg="Executing migration" id="rename data_keys name column to label" 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.869052675Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=10.696629ms 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.870063956Z level=info msg="Executing migration" id="rename data_keys id column back to name" 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.881121334Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=11.05403ms 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.882169484Z level=info msg="Executing migration" id="add column hidden to role table" 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.884461182Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=2.289524ms 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.885415777Z level=info msg="Executing migration" id="permission kind migration" 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.887611184Z level=info msg="Migration successfully executed" id="permission kind migration" duration=2.195346ms 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.888606374Z level=info msg="Executing migration" id="permission attribute migration" 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.890716693Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=2.109878ms 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.891486549Z level=info msg="Executing migration" id="permission identifier migration" 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.893676707Z level=info 
msg="Migration successfully executed" id="permission identifier migration" duration=2.189676ms 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.894636021Z level=info msg="Executing migration" id="add permission identifier index" 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.895169393Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=533.443µs 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.896451093Z level=info msg="Executing migration" id="add permission action scope role_id index" 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.896920756Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=469.533µs 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.897979616Z level=info msg="Executing migration" id="remove permission role_id action scope index" 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.898493533Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=514.398µs 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.899293947Z level=info msg="Executing migration" id="create query_history table v1" 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.899690873Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=396.766µs 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.900777667Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.901241829Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=465.766µs 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.902509322Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.902537344Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=28.353µs 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.903490026Z level=info msg="Executing migration" id="rbac disabled migrator" 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.903509923Z level=info msg="Migration successfully executed" id="rbac 
disabled migrator" duration=20.189µs 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.904805408Z level=info msg="Executing migration" id="teams permissions migration" 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.905120611Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=313.57µs 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.90600333Z level=info msg="Executing migration" id="dashboard permissions" 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.908517246Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=2.514026ms 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.909634417Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.911147862Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=1.513254ms 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.911873225Z level=info msg="Executing migration" id="drop managed folder create actions" 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.912017437Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=144.221µs 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.912958365Z level=info msg="Executing migration" id="alerting notification permissions" 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.913242529Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=284.114µs 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.914002268Z level=info msg="Executing migration" id="create query_history_star table v1" 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.91442899Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=426.401µs 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.915507257Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.916009581Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=502.274µs 2026-03-09T14:42:34.044 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.917266965Z 
level=info msg="Executing migration" id="add column org_id in query_history_star" 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.919527505Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=2.260379ms 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.92038656Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.920414222Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=28.102µs 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.921922056Z level=info msg="Executing migration" id="create correlation table v1" 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.922343959Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=421.332µs 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.923378424Z level=info msg="Executing migration" id="add index correlations.uid" 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.923858557Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=480.302µs 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.924912789Z level=info msg="Executing migration" id="add index correlations.source_uid" 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.925448907Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=536.208µs 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.926484643Z level=info msg="Executing migration" id="add correlation config column" 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.92875921Z level=info msg="Migration successfully executed" id="add correlation config column" duration=2.272412ms 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.929664973Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.930144003Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=477.347µs 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.930873855Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 2026-03-09T14:42:34.045 
INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.931324342Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=450.257µs 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.932057771Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.938588949Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=6.531118ms 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.93953082Z level=info msg="Executing migration" id="create correlation v2" 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.940033935Z level=info msg="Migration successfully executed" id="create correlation v2" duration=502.694µs 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.941193065Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.941659Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=465.986µs 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.942626068Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.94312711Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=500.973µs 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.94422272Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.944684699Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=457.82µs 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.945797591Z level=info msg="Executing migration" id="copy correlation v1 to v2" 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.94593548Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=137.618µs 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.94671786Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.947139343Z 
level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=421.453µs 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.947894283Z level=info msg="Executing migration" id="add provisioning column" 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.950155203Z level=info msg="Migration successfully executed" id="add provisioning column" duration=2.26073ms 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.951262354Z level=info msg="Executing migration" id="create entity_events table" 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.951616661Z level=info msg="Migration successfully executed" id="create entity_events table" duration=354.107µs 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.952547771Z level=info msg="Executing migration" id="create dashboard public config v1" 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.953012534Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=464.583µs 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.954076244Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.954316295Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.955065874Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.955282512Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.956351161Z level=info msg="Executing migration" id="Drop old dashboard public config table" 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.956733369Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=381.958µs 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.957629344Z level=info msg="Executing migration" id="recreate dashboard public config v1" 2026-03-09T14:42:34.045 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.958088236Z level=info msg="Migration successfully executed" id="recreate 
dashboard public config v1" duration=458.521µs 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.95880824Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.959285517Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=475.614µs 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.960277472Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.960791899Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=514.656µs 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.961871809Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.962346833Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=475.063µs 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.963070914Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.963572437Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=501.263µs 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.96428765Z level=info msg="Executing migration" id="Drop public config table" 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.964721596Z level=info msg="Migration successfully executed" id="Drop public config table" duration=434.456µs 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.965662486Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.966266892Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=604.066µs 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.967285837Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.967830892Z level=info msg="Migration successfully 
executed" id="create index UQE_dashboard_public_config_uid - v2" duration=544.062µs 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.968728079Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.969148039Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=420.17µs 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.970023625Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.970490983Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=465.044µs 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.971443093Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.97794159Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=6.498857ms 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.979114514Z level=info msg="Executing migration" id="add annotations_enabled column" 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.981446398Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=2.331884ms 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.982497324Z level=info msg="Executing migration" id="add time_selection_enabled column" 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.98480911Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=2.311556ms 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.985641345Z level=info msg="Executing migration" id="delete orphaned public dashboards" 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.985743046Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=101.621µs 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.98662855Z level=info msg="Executing migration" id="add share column" 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.988924206Z level=info msg="Migration successfully executed" 
id="add share column" duration=2.295536ms 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.989934796Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.990021679Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=86.884µs 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.990940076Z level=info msg="Executing migration" id="create file table" 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.991349947Z level=info msg="Migration successfully executed" id="create file table" duration=409.751µs 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.992789192Z level=info msg="Executing migration" id="file table idx: path natural pk" 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.993232786Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=442.502µs 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.99422477Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.994618421Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=391.937µs 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.995606528Z level=info msg="Executing migration" id="create file_meta table" 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.995915319Z level=info msg="Migration successfully executed" id="create file_meta table" duration=308.781µs 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.997113121Z level=info msg="Executing migration" id="file table idx: path key" 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.997516037Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=402.326µs 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.998481012Z level=info msg="Executing migration" id="set path collation in file table" 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.998504687Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=23.946µs 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: 
logger=migrator t=2026-03-09T14:42:33.999428103Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:33 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:33.999455094Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=27.342µs 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.000546726Z level=info msg="Executing migration" id="managed permissions migration" 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.002117328Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=1.571223ms 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.002929876Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.003569688Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=639.822µs 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.004287017Z level=info msg="Executing migration" id="RBAC action name migrator" 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.004851158Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=564.2µs 2026-03-09T14:42:34.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.005735009Z level=info msg="Executing migration" id="Add UID column to playlist" 2026-03-09T14:42:34.047 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.008222145Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=2.485083ms 2026-03-09T14:42:34.047 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.009250328Z level=info msg="Executing migration" id="Update uid column values in playlist" 2026-03-09T14:42:34.047 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.009322794Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=72.036µs 2026-03-09T14:42:34.047 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.010263733Z level=info msg="Executing migration" id="Add index for uid in playlist" 2026-03-09T14:42:34.047 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.0106928Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=429.498µs 2026-03-09T14:42:34.047 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.011676538Z level=info msg="Executing migration" id="update group index for alert rules" 2026-03-09T14:42:34.047 
INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.0118476Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=171.362µs 2026-03-09T14:42:34.047 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.012697999Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 2026-03-09T14:42:34.047 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.013069247Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=371.578µs 2026-03-09T14:42:34.047 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.013779793Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 2026-03-09T14:42:34.047 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.013987383Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=207.68µs 2026-03-09T14:42:34.047 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.014869452Z level=info msg="Executing migration" id="add action column to seed_assignment" 2026-03-09T14:42:34.047 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.017301904Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=2.430599ms 2026-03-09T14:42:34.047 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.01818319Z level=info msg="Executing migration" id="add scope column to seed_assignment" 2026-03-09T14:42:34.047 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.021041916Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=2.858395ms 2026-03-09T14:42:34.047 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.021954701Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 2026-03-09T14:42:34.047 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.022404897Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=450.156µs 2026-03-09T14:42:34.047 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.023116645Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 2026-03-09T14:42:34.047 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.047216893Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=24.082063ms 2026-03-09T14:42:34.201 INFO:teuthology.orchestra.run.vm07.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T14:42:34.201 INFO:teuthology.orchestra.run.vm07.stdout:alertmanager.a vm07 *:9093,9094 
running (4m) 17s ago 9m 13.7M - 0.25.0 c8568f914cd2 7b5214f8e385
2026-03-09T14:42:34.201 INFO:teuthology.orchestra.run.vm07.stdout:grafana.a vm11 *:3000 starting - - - -
2026-03-09T14:42:34.201 INFO:teuthology.orchestra.run.vm07.stdout:iscsi.foo.vm07.ohlmos vm07 running (22s) 17s ago 9m 76.0M - 3.9 654f31e6858e fe7cab5d4b5d
2026-03-09T14:42:34.201 INFO:teuthology.orchestra.run.vm07.stdout:mgr.x vm11 *:8443,9283,8765 running (4m) 17s ago 12m 466M - 19.2.3-678-ge911bdeb 654f31e6858e d35dddd392d1
2026-03-09T14:42:34.201 INFO:teuthology.orchestra.run.vm07.stdout:mgr.y vm07 *:8443,9283,8765 running (4m) 17s ago 13m 538M - 19.2.3-678-ge911bdeb 654f31e6858e bdbac6dff330
2026-03-09T14:42:34.201 INFO:teuthology.orchestra.run.vm07.stdout:mon.a vm07 running (3m) 17s ago 13m 53.1M 2048M 19.2.3-678-ge911bdeb 654f31e6858e bcdaa5dfc948
2026-03-09T14:42:34.201 INFO:teuthology.orchestra.run.vm07.stdout:mon.b vm11 running (3m) 17s ago 12m 44.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1caba9bf8a13
2026-03-09T14:42:34.201 INFO:teuthology.orchestra.run.vm07.stdout:mon.c vm07 running (4m) 17s ago 12m 51.7M 2048M 19.2.3-678-ge911bdeb 654f31e6858e ff7dfe3a6c7c
2026-03-09T14:42:34.201 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.a vm07 *:9100 running (4m) 17s ago 10m 7719k - 1.7.0 72c9c2088986 16d64a9c3aa7
2026-03-09T14:42:34.201 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.b vm11 *:9100 running (4m) 17s ago 10m 7531k - 1.7.0 72c9c2088986 8e368c535897
2026-03-09T14:42:34.201 INFO:teuthology.orchestra.run.vm07.stdout:osd.0 vm07 running (2m) 17s ago 12m 53.0M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 24632814894d
2026-03-09T14:42:34.201 INFO:teuthology.orchestra.run.vm07.stdout:osd.1 vm07 running (2m) 17s ago 12m 75.3M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 1f773b5d0f68
2026-03-09T14:42:34.201 INFO:teuthology.orchestra.run.vm07.stdout:osd.2 vm07 running (2m) 17s ago 11m 70.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 7d943c2f091c
2026-03-09T14:42:34.201 INFO:teuthology.orchestra.run.vm07.stdout:osd.3 vm07 running (3m) 17s ago 11m 56.1M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 7c234b83449a
2026-03-09T14:42:34.201 INFO:teuthology.orchestra.run.vm07.stdout:osd.4 vm11 running (115s) 17s ago 11m 54.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 811379ab4ba5
2026-03-09T14:42:34.201 INFO:teuthology.orchestra.run.vm07.stdout:osd.5 vm11 running (98s) 17s ago 11m 71.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e bc7e71aa5718
2026-03-09T14:42:34.201 INFO:teuthology.orchestra.run.vm07.stdout:osd.6 vm11 running (81s) 17s ago 10m 48.1M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 20bc2716b966
2026-03-09T14:42:34.201 INFO:teuthology.orchestra.run.vm07.stdout:osd.7 vm11 running (65s) 17s ago 10m 71.0M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 2557f7ad255a
2026-03-09T14:42:34.201 INFO:teuthology.orchestra.run.vm07.stdout:prometheus.a vm11 *:9095 running (4m) 17s ago 10m 40.7M - 2.51.0 1d3b7f56885b e88f0339687c
2026-03-09T14:42:34.201 INFO:teuthology.orchestra.run.vm07.stdout:rgw.foo.vm07.urmgxb vm07 *:8000 running (48s) 17s ago 9m 91.4M - 19.2.3-678-ge911bdeb 654f31e6858e df702c44464d
2026-03-09T14:42:34.201 INFO:teuthology.orchestra.run.vm07.stdout:rgw.foo.vm11.ncyump vm11 *:8000 running (47s) 17s ago 9m 91.3M - 19.2.3-678-ge911bdeb 654f31e6858e 75ca9d41b995
2026-03-09T14:42:34.201 INFO:teuthology.orchestra.run.vm07.stdout:rgw.smpl.vm07.tkkeli vm07 *:80 running (50s) 17s ago 9m 91.4M - 19.2.3-678-ge911bdeb 654f31e6858e 9a13050e9ad3
2026-03-09T14:42:34.201 INFO:teuthology.orchestra.run.vm07.stdout:rgw.smpl.vm11.ocxkef vm11 *:80
running (45s) 17s ago 9m 93.3M - 19.2.3-678-ge911bdeb 654f31e6858e 3dd8df0c45b8 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.049540721Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.050385589Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=846.882µs 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.051798696Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.05303981Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.23926ms 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.054956383Z level=info msg="Executing migration" id="add primary key to seed_assigment" 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.06647654Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=11.518023ms 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.068121712Z level=info msg="Executing migration" id="add origin column to seed_assignment" 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.070639206Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=2.51554ms 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.071724987Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.071968755Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=233.218µs 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.073015483Z level=info msg="Executing migration" id="prevent seeding OnCall access" 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.073207393Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=192.882µs 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.074106595Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.074659093Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" 
duration=552.548µs 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.075888404Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.07676925Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=880.506µs 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.078053334Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.078276122Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=222.879µs 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.079089923Z level=info msg="Executing migration" id="create folder table" 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.079608979Z level=info msg="Migration successfully executed" id="create folder table" duration=518.705µs 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.080787283Z level=info msg="Executing migration" id="Add index for parent_uid" 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.081409443Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=622.27µs 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.082520081Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.083073081Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=551.668µs 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.084146149Z level=info msg="Executing migration" id="Update folder title length" 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.084158011Z level=info msg="Migration successfully executed" id="Update folder title length" duration=12.243µs 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.085206641Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.0858332Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=627.701µs 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 
09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.086866823Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.087414302Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=547.399µs 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.088274599Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.088841204Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=566.425µs 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.09043977Z level=info msg="Executing migration" id="Sync dashboard and folder table" 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.09081232Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=372.549µs 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.124477849Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.125093737Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=619.114µs 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.169166271Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.170024785Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=861.429µs 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.17097979Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.171582544Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=602.803µs 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.172517371Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.173058398Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=539.734µs 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 
14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.173881034Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" 2026-03-09T14:42:34.297 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.174481223Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=600.149µs 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.175292518Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.175828615Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=536.148µs 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.176774905Z level=info msg="Executing migration" id="create anon_device table" 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.177233516Z level=info msg="Migration successfully executed" id="create anon_device table" duration=458.441µs 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.178037128Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.178624582Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=587.674µs 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.179782599Z level=info msg="Executing migration" id="add index anon_device.updated_at" 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.180374012Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=592.245µs 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.181456908Z level=info msg="Executing migration" id="create signing_key table" 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.181947149Z level=info msg="Migration successfully executed" id="create signing_key table" duration=490.171µs 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.183073385Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.183699342Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=625.847µs 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.185005828Z level=info msg="Executing migration" id="set legacy alert migration 
status in kvstore" 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.185603182Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=597.414µs 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.186457377Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.186718949Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=261.882µs 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.187558919Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.190050292Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=2.490853ms 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.191059849Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.192231803Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=1.172605ms 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.194349715Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.195005137Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=655.622µs 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.195875964Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.196471603Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=595.699µs 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.197439493Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.197973707Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=534.365µs 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.198773291Z level=info 
msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.199411581Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=638.06µs 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.200257191Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.200883468Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=626.246µs 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.201824758Z level=info msg="Executing migration" id="create sso_setting table" 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.202454542Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=629.283µs 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.203314118Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.203929734Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=615.777µs 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.204883738Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.205282608Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=398.37µs 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.206126114Z level=info msg="Executing migration" id="alter kv_store.value to longtext" 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.206152523Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=26.86µs 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.207004004Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.209758292Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=2.753456ms 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.210795131Z level=info msg="Executing migration" 
id="add notification_settings column to alert_rule_version table" 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.213488134Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=2.691951ms 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.214545702Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.214843331Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=297.578µs 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=migrator t=2026-03-09T14:42:34.215703168Z level=info msg="migrations completed" performed=169 skipped=378 duration=503.022992ms 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=sqlstore t=2026-03-09T14:42:34.216300902Z level=info msg="Created default organization" 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=secrets t=2026-03-09T14:42:34.218956504Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=plugin.store t=2026-03-09T14:42:34.228266948Z level=info msg="Loading plugins..." 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=local.finder t=2026-03-09T14:42:34.268135287Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=plugin.store t=2026-03-09T14:42:34.268153411Z level=info msg="Plugins loaded" count=55 duration=39.887495ms 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=query_data t=2026-03-09T14:42:34.270661266Z level=info msg="Query Service initialization" 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=live.push_http t=2026-03-09T14:42:34.27822723Z level=info msg="Live Push Gateway initialization" 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=ngalert.migration t=2026-03-09T14:42:34.280879466Z level=info msg=Starting 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=ngalert t=2026-03-09T14:42:34.286553402Z level=warn msg="Unexpected number of rows updating alert configuration history" rows=0 org=1 hash=not-yet-calculated 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=ngalert.state.manager t=2026-03-09T14:42:34.287453004Z level=info msg="Running in alternative execution of Error/NoData mode" 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=infra.usagestats.collector t=2026-03-09T14:42:34.288677786Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 
2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=provisioning.datasources t=2026-03-09T14:42:34.291262124Z level=info msg="deleted datasource based on configuration" name=Dashboard1 2026-03-09T14:42:34.298 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=provisioning.datasources t=2026-03-09T14:42:34.291596553Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596 2026-03-09T14:42:34.299 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[41290]: ts=2026-03-09T14:42:34.146Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.3\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.3\", ceph_version=\"ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.3\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.107\", device_class=\"hdd\", hostname=\"vm07\", instance=\"192.168.123.111:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.107\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:42:34.447 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T14:42:34.447 INFO:teuthology.orchestra.run.vm07.stdout: "mon": { 2026-03-09T14:42:34.447 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3 2026-03-09T14:42:34.447 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:42:34.447 INFO:teuthology.orchestra.run.vm07.stdout: "mgr": { 2026-03-09T14:42:34.447 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-09T14:42:34.447 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:42:34.447 INFO:teuthology.orchestra.run.vm07.stdout: "osd": { 2026-03-09T14:42:34.447 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 8 2026-03-09T14:42:34.447 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:42:34.447 
INFO:teuthology.orchestra.run.vm07.stdout: "rgw": { 2026-03-09T14:42:34.447 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 4 2026-03-09T14:42:34.447 INFO:teuthology.orchestra.run.vm07.stdout: }, 2026-03-09T14:42:34.447 INFO:teuthology.orchestra.run.vm07.stdout: "overall": { 2026-03-09T14:42:34.447 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 17 2026-03-09T14:42:34.447 INFO:teuthology.orchestra.run.vm07.stdout: } 2026-03-09T14:42:34.447 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T14:42:34.653 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:34 vm07 bash[55244]: cluster 2026-03-09T14:42:32.570290+0000 mgr.y (mgr.44103) 273 : cluster [DBG] pgmap v160: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 2 op/s 2026-03-09T14:42:34.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:34 vm07 bash[55244]: cluster 2026-03-09T14:42:32.570290+0000 mgr.y (mgr.44103) 273 : cluster [DBG] pgmap v160: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 2 op/s 2026-03-09T14:42:34.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:34 vm07 bash[55244]: audit 2026-03-09T14:42:33.355466+0000 mon.a (mon.0) 678 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:34.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:34 vm07 bash[55244]: audit 2026-03-09T14:42:33.355466+0000 mon.a (mon.0) 678 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:34.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:34 vm07 bash[55244]: audit 2026-03-09T14:42:33.360199+0000 mon.a (mon.0) 679 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:34.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:34 vm07 bash[55244]: audit 2026-03-09T14:42:33.360199+0000 mon.a (mon.0) 679 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:34.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:34 vm07 bash[56315]: cluster 2026-03-09T14:42:32.570290+0000 mgr.y (mgr.44103) 273 : cluster [DBG] pgmap v160: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 2 op/s 2026-03-09T14:42:34.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:34 vm07 bash[56315]: cluster 2026-03-09T14:42:32.570290+0000 mgr.y (mgr.44103) 273 : cluster [DBG] pgmap v160: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 2 op/s 2026-03-09T14:42:34.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:34 vm07 bash[56315]: audit 2026-03-09T14:42:33.355466+0000 mon.a (mon.0) 678 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:34.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:34 vm07 bash[56315]: audit 2026-03-09T14:42:33.355466+0000 mon.a (mon.0) 678 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:34.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:34 vm07 bash[56315]: audit 2026-03-09T14:42:33.360199+0000 mon.a (mon.0) 679 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:34.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:34 vm07 bash[56315]: audit 2026-03-09T14:42:33.360199+0000 mon.a 
(mon.0) 679 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:34.658 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T14:42:34.658 INFO:teuthology.orchestra.run.vm07.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", 2026-03-09T14:42:34.658 INFO:teuthology.orchestra.run.vm07.stdout: "in_progress": true, 2026-03-09T14:42:34.658 INFO:teuthology.orchestra.run.vm07.stdout: "which": "Upgrading all daemon types on all hosts", 2026-03-09T14:42:34.658 INFO:teuthology.orchestra.run.vm07.stdout: "services_complete": [ 2026-03-09T14:42:34.658 INFO:teuthology.orchestra.run.vm07.stdout: "rgw", 2026-03-09T14:42:34.658 INFO:teuthology.orchestra.run.vm07.stdout: "mgr", 2026-03-09T14:42:34.658 INFO:teuthology.orchestra.run.vm07.stdout: "iscsi", 2026-03-09T14:42:34.658 INFO:teuthology.orchestra.run.vm07.stdout: "mon", 2026-03-09T14:42:34.658 INFO:teuthology.orchestra.run.vm07.stdout: "osd" 2026-03-09T14:42:34.658 INFO:teuthology.orchestra.run.vm07.stdout: ], 2026-03-09T14:42:34.658 INFO:teuthology.orchestra.run.vm07.stdout: "progress": "18/23 daemons upgraded", 2026-03-09T14:42:34.658 INFO:teuthology.orchestra.run.vm07.stdout: "message": "Currently upgrading grafana daemons", 2026-03-09T14:42:34.658 INFO:teuthology.orchestra.run.vm07.stdout: "is_paused": false 2026-03-09T14:42:34.658 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T14:42:34.752 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=provisioning.alerting t=2026-03-09T14:42:34.301974814Z level=info msg="starting to provision alerting" 2026-03-09T14:42:34.752 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=provisioning.alerting t=2026-03-09T14:42:34.301990513Z level=info msg="finished to provision alerting" 2026-03-09T14:42:34.752 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=provisioning.dashboard t=2026-03-09T14:42:34.303952052Z level=info msg="starting to provision dashboards" 2026-03-09T14:42:34.752 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=http.server t=2026-03-09T14:42:34.305559603Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA 2026-03-09T14:42:34.752 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=http.server t=2026-03-09T14:42:34.305939718Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=https subUrl= socket= 2026-03-09T14:42:34.752 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=ngalert.state.manager t=2026-03-09T14:42:34.306133513Z level=info msg="Warming state cache for startup" 2026-03-09T14:42:34.752 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=ngalert.multiorg.alertmanager t=2026-03-09T14:42:34.306639223Z level=info msg="Starting MultiOrg Alertmanager" 2026-03-09T14:42:34.752 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=grafanaStorageLogger t=2026-03-09T14:42:34.308750132Z level=info msg="Storage starting" 
2026-03-09T14:42:34.752 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=sqlstore.transactions t=2026-03-09T14:42:34.360679175Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 2026-03-09T14:42:34.752 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=plugins.update.checker t=2026-03-09T14:42:34.378837221Z level=info msg="Update check succeeded" duration=71.057805ms 2026-03-09T14:42:34.752 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=sqlstore.transactions t=2026-03-09T14:42:34.381098772Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 2026-03-09T14:42:34.752 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=ngalert.state.manager t=2026-03-09T14:42:34.414244595Z level=info msg="State cache has been initialized" states=0 duration=108.11049ms 2026-03-09T14:42:34.752 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=ngalert.scheduler t=2026-03-09T14:42:34.423288086Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1 2026-03-09T14:42:34.752 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=ticker t=2026-03-09T14:42:34.423424542Z level=info msg=starting first_tick=2026-03-09T14:42:40Z 2026-03-09T14:42:34.752 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=provisioning.dashboard t=2026-03-09T14:42:34.504805614Z level=info msg="finished to provision dashboards" 2026-03-09T14:42:34.752 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=grafana-apiserver t=2026-03-09T14:42:34.508138851Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 2026-03-09T14:42:34.752 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:42:34 vm11 bash[59245]: logger=grafana-apiserver t=2026-03-09T14:42:34.508852392Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 2026-03-09T14:42:34.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:34 vm11 bash[43577]: cluster 2026-03-09T14:42:32.570290+0000 mgr.y (mgr.44103) 273 : cluster [DBG] pgmap v160: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 2 op/s 2026-03-09T14:42:34.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:34 vm11 bash[43577]: cluster 2026-03-09T14:42:32.570290+0000 mgr.y (mgr.44103) 273 : cluster [DBG] pgmap v160: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 2 op/s 2026-03-09T14:42:34.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:34 vm11 bash[43577]: audit 2026-03-09T14:42:33.355466+0000 mon.a (mon.0) 678 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:34.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:34 vm11 bash[43577]: audit 2026-03-09T14:42:33.355466+0000 mon.a (mon.0) 678 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:34.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:34 vm11 bash[43577]: audit 2026-03-09T14:42:33.360199+0000 mon.a (mon.0) 679 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:34.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:34 vm11 bash[43577]: audit 
2026-03-09T14:42:33.360199+0000 mon.a (mon.0) 679 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:34.893 INFO:teuthology.orchestra.run.vm07.stdout:HEALTH_OK 2026-03-09T14:42:35.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:35 vm11 bash[43577]: audit 2026-03-09T14:42:33.811858+0000 mgr.y (mgr.44103) 274 : audit [DBG] from='client.54416 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:35.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:35 vm11 bash[43577]: audit 2026-03-09T14:42:33.811858+0000 mgr.y (mgr.44103) 274 : audit [DBG] from='client.54416 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:35.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:35 vm11 bash[43577]: audit 2026-03-09T14:42:34.009389+0000 mgr.y (mgr.44103) 275 : audit [DBG] from='client.44532 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:35.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:35 vm11 bash[43577]: audit 2026-03-09T14:42:34.009389+0000 mgr.y (mgr.44103) 275 : audit [DBG] from='client.44532 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:35.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:35 vm11 bash[43577]: audit 2026-03-09T14:42:34.206284+0000 mgr.y (mgr.44103) 276 : audit [DBG] from='client.44538 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:35.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:35 vm11 bash[43577]: audit 2026-03-09T14:42:34.206284+0000 mgr.y (mgr.44103) 276 : audit [DBG] from='client.44538 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:35.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:35 vm11 bash[43577]: audit 2026-03-09T14:42:34.456260+0000 mon.c (mon.1) 25 : audit [DBG] from='client.? 192.168.123.107:0/838585822' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:35.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:35 vm11 bash[43577]: audit 2026-03-09T14:42:34.456260+0000 mon.c (mon.1) 25 : audit [DBG] from='client.? 192.168.123.107:0/838585822' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:35.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:35 vm11 bash[43577]: audit 2026-03-09T14:42:34.898070+0000 mon.b (mon.2) 11 : audit [DBG] from='client.? 192.168.123.107:0/3667830581' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:42:35.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:35 vm11 bash[43577]: audit 2026-03-09T14:42:34.898070+0000 mon.b (mon.2) 11 : audit [DBG] from='client.? 
192.168.123.107:0/3667830581' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:42:35.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:35 vm07 bash[55244]: audit 2026-03-09T14:42:33.811858+0000 mgr.y (mgr.44103) 274 : audit [DBG] from='client.54416 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:35.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:35 vm07 bash[55244]: audit 2026-03-09T14:42:33.811858+0000 mgr.y (mgr.44103) 274 : audit [DBG] from='client.54416 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:35.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:35 vm07 bash[55244]: audit 2026-03-09T14:42:34.009389+0000 mgr.y (mgr.44103) 275 : audit [DBG] from='client.44532 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:35.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:35 vm07 bash[55244]: audit 2026-03-09T14:42:34.009389+0000 mgr.y (mgr.44103) 275 : audit [DBG] from='client.44532 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:35.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:35 vm07 bash[55244]: audit 2026-03-09T14:42:34.206284+0000 mgr.y (mgr.44103) 276 : audit [DBG] from='client.44538 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:35.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:35 vm07 bash[55244]: audit 2026-03-09T14:42:34.206284+0000 mgr.y (mgr.44103) 276 : audit [DBG] from='client.44538 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:35.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:35 vm07 bash[55244]: audit 2026-03-09T14:42:34.456260+0000 mon.c (mon.1) 25 : audit [DBG] from='client.? 192.168.123.107:0/838585822' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:35.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:35 vm07 bash[55244]: audit 2026-03-09T14:42:34.456260+0000 mon.c (mon.1) 25 : audit [DBG] from='client.? 192.168.123.107:0/838585822' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:35.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:35 vm07 bash[55244]: audit 2026-03-09T14:42:34.898070+0000 mon.b (mon.2) 11 : audit [DBG] from='client.? 192.168.123.107:0/3667830581' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:42:35.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:35 vm07 bash[55244]: audit 2026-03-09T14:42:34.898070+0000 mon.b (mon.2) 11 : audit [DBG] from='client.? 
192.168.123.107:0/3667830581' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:42:35.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:35 vm07 bash[56315]: audit 2026-03-09T14:42:33.811858+0000 mgr.y (mgr.44103) 274 : audit [DBG] from='client.54416 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:35.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:35 vm07 bash[56315]: audit 2026-03-09T14:42:33.811858+0000 mgr.y (mgr.44103) 274 : audit [DBG] from='client.54416 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:35.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:35 vm07 bash[56315]: audit 2026-03-09T14:42:34.009389+0000 mgr.y (mgr.44103) 275 : audit [DBG] from='client.44532 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:35.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:35 vm07 bash[56315]: audit 2026-03-09T14:42:34.009389+0000 mgr.y (mgr.44103) 275 : audit [DBG] from='client.44532 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:35.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:35 vm07 bash[56315]: audit 2026-03-09T14:42:34.206284+0000 mgr.y (mgr.44103) 276 : audit [DBG] from='client.44538 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:35.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:35 vm07 bash[56315]: audit 2026-03-09T14:42:34.206284+0000 mgr.y (mgr.44103) 276 : audit [DBG] from='client.44538 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:35.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:35 vm07 bash[56315]: audit 2026-03-09T14:42:34.456260+0000 mon.c (mon.1) 25 : audit [DBG] from='client.? 192.168.123.107:0/838585822' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:35.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:35 vm07 bash[56315]: audit 2026-03-09T14:42:34.456260+0000 mon.c (mon.1) 25 : audit [DBG] from='client.? 192.168.123.107:0/838585822' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:35.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:35 vm07 bash[56315]: audit 2026-03-09T14:42:34.898070+0000 mon.b (mon.2) 11 : audit [DBG] from='client.? 192.168.123.107:0/3667830581' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:42:35.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:35 vm07 bash[56315]: audit 2026-03-09T14:42:34.898070+0000 mon.b (mon.2) 11 : audit [DBG] from='client.? 
192.168.123.107:0/3667830581' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:42:36.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:36 vm11 bash[43577]: cluster 2026-03-09T14:42:34.570765+0000 mgr.y (mgr.44103) 277 : cluster [DBG] pgmap v161: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 3.1 KiB/s rd, 3 op/s 2026-03-09T14:42:36.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:36 vm11 bash[43577]: cluster 2026-03-09T14:42:34.570765+0000 mgr.y (mgr.44103) 277 : cluster [DBG] pgmap v161: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 3.1 KiB/s rd, 3 op/s 2026-03-09T14:42:36.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:36 vm11 bash[43577]: audit 2026-03-09T14:42:34.667462+0000 mgr.y (mgr.44103) 278 : audit [DBG] from='client.34543 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:36.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:36 vm11 bash[43577]: audit 2026-03-09T14:42:34.667462+0000 mgr.y (mgr.44103) 278 : audit [DBG] from='client.34543 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:36.903 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:36 vm07 bash[55244]: cluster 2026-03-09T14:42:34.570765+0000 mgr.y (mgr.44103) 277 : cluster [DBG] pgmap v161: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 3.1 KiB/s rd, 3 op/s 2026-03-09T14:42:36.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:36 vm07 bash[55244]: cluster 2026-03-09T14:42:34.570765+0000 mgr.y (mgr.44103) 277 : cluster [DBG] pgmap v161: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 3.1 KiB/s rd, 3 op/s 2026-03-09T14:42:36.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:36 vm07 bash[55244]: audit 2026-03-09T14:42:34.667462+0000 mgr.y (mgr.44103) 278 : audit [DBG] from='client.34543 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:36.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:36 vm07 bash[55244]: audit 2026-03-09T14:42:34.667462+0000 mgr.y (mgr.44103) 278 : audit [DBG] from='client.34543 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:36.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:36 vm07 bash[56315]: cluster 2026-03-09T14:42:34.570765+0000 mgr.y (mgr.44103) 277 : cluster [DBG] pgmap v161: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 3.1 KiB/s rd, 3 op/s 2026-03-09T14:42:36.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:36 vm07 bash[56315]: cluster 2026-03-09T14:42:34.570765+0000 mgr.y (mgr.44103) 277 : cluster [DBG] pgmap v161: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 3.1 KiB/s rd, 3 op/s 2026-03-09T14:42:36.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:36 vm07 bash[56315]: audit 2026-03-09T14:42:34.667462+0000 mgr.y (mgr.44103) 278 : audit [DBG] from='client.34543 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:36.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:36 vm07 bash[56315]: audit 2026-03-09T14:42:34.667462+0000 mgr.y (mgr.44103) 278 : audit [DBG] from='client.34543 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", 
"target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:42:37.252 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:42:36 vm11 bash[41290]: ts=2026-03-09T14:42:36.947Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"f59f9828-1bc3-11f1-bfd8-7b3d0c866040\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-09T14:42:38.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:38 vm11 bash[43577]: cluster 2026-03-09T14:42:36.571219+0000 mgr.y (mgr.44103) 279 : cluster [DBG] pgmap v162: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T14:42:38.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:38 vm11 bash[43577]: cluster 2026-03-09T14:42:36.571219+0000 mgr.y (mgr.44103) 279 : cluster [DBG] pgmap v162: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T14:42:38.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:38 vm11 bash[43577]: audit 2026-03-09T14:42:37.576494+0000 mon.a (mon.0) 680 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:42:38.752 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:38 vm11 bash[43577]: audit 2026-03-09T14:42:37.576494+0000 mon.a (mon.0) 680 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:42:38.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:38 vm07 bash[55244]: cluster 2026-03-09T14:42:36.571219+0000 mgr.y (mgr.44103) 279 : cluster [DBG] pgmap v162: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T14:42:38.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:38 vm07 bash[55244]: cluster 2026-03-09T14:42:36.571219+0000 mgr.y (mgr.44103) 279 : cluster [DBG] pgmap v162: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T14:42:38.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:38 vm07 bash[55244]: audit 2026-03-09T14:42:37.576494+0000 mon.a (mon.0) 680 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": 
"json"}]: dispatch 2026-03-09T14:42:38.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:38 vm07 bash[55244]: audit 2026-03-09T14:42:37.576494+0000 mon.a (mon.0) 680 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:42:38.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:38 vm07 bash[56315]: cluster 2026-03-09T14:42:36.571219+0000 mgr.y (mgr.44103) 279 : cluster [DBG] pgmap v162: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T14:42:38.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:38 vm07 bash[56315]: cluster 2026-03-09T14:42:36.571219+0000 mgr.y (mgr.44103) 279 : cluster [DBG] pgmap v162: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T14:42:38.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:38 vm07 bash[56315]: audit 2026-03-09T14:42:37.576494+0000 mon.a (mon.0) 680 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:42:38.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:38 vm07 bash[56315]: audit 2026-03-09T14:42:37.576494+0000 mon.a (mon.0) 680 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:42:39.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:39 vm07 bash[55244]: cluster 2026-03-09T14:42:38.571585+0000 mgr.y (mgr.44103) 280 : cluster [DBG] pgmap v163: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T14:42:39.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:39 vm07 bash[55244]: cluster 2026-03-09T14:42:38.571585+0000 mgr.y (mgr.44103) 280 : cluster [DBG] pgmap v163: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T14:42:39.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:39 vm07 bash[55244]: audit 2026-03-09T14:42:38.596021+0000 mon.a (mon.0) 681 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:39.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:39 vm07 bash[55244]: audit 2026-03-09T14:42:38.596021+0000 mon.a (mon.0) 681 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:39.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:39 vm07 bash[55244]: audit 2026-03-09T14:42:38.611063+0000 mon.a (mon.0) 682 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:39.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:39 vm07 bash[55244]: audit 2026-03-09T14:42:38.611063+0000 mon.a (mon.0) 682 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:39.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:39 vm07 bash[55244]: audit 2026-03-09T14:42:38.689916+0000 mon.a (mon.0) 683 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:39.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:39 vm07 bash[55244]: audit 2026-03-09T14:42:38.689916+0000 mon.a (mon.0) 683 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:39.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:39 vm07 bash[55244]: audit 2026-03-09T14:42:38.695715+0000 mon.a 
(mon.0) 684 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:39.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:39 vm07 bash[55244]: audit 2026-03-09T14:42:38.695715+0000 mon.a (mon.0) 684 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:39.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:39 vm07 bash[55244]: audit 2026-03-09T14:42:39.249119+0000 mon.a (mon.0) 685 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:39.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:39 vm07 bash[55244]: audit 2026-03-09T14:42:39.249119+0000 mon.a (mon.0) 685 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:39.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:39 vm07 bash[55244]: audit 2026-03-09T14:42:39.256521+0000 mon.a (mon.0) 686 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:39.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:39 vm07 bash[55244]: audit 2026-03-09T14:42:39.256521+0000 mon.a (mon.0) 686 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:39.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:39 vm07 bash[56315]: cluster 2026-03-09T14:42:38.571585+0000 mgr.y (mgr.44103) 280 : cluster [DBG] pgmap v163: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T14:42:39.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:39 vm07 bash[56315]: cluster 2026-03-09T14:42:38.571585+0000 mgr.y (mgr.44103) 280 : cluster [DBG] pgmap v163: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T14:42:39.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:39 vm07 bash[56315]: audit 2026-03-09T14:42:38.596021+0000 mon.a (mon.0) 681 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:39.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:39 vm07 bash[56315]: audit 2026-03-09T14:42:38.596021+0000 mon.a (mon.0) 681 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:39.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:39 vm07 bash[56315]: audit 2026-03-09T14:42:38.611063+0000 mon.a (mon.0) 682 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:39.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:39 vm07 bash[56315]: audit 2026-03-09T14:42:38.611063+0000 mon.a (mon.0) 682 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:39.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:39 vm07 bash[56315]: audit 2026-03-09T14:42:38.689916+0000 mon.a (mon.0) 683 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:39.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:39 vm07 bash[56315]: audit 2026-03-09T14:42:38.689916+0000 mon.a (mon.0) 683 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:39.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:39 vm07 bash[56315]: audit 2026-03-09T14:42:38.695715+0000 mon.a (mon.0) 684 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:39.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:39 vm07 bash[56315]: audit 2026-03-09T14:42:38.695715+0000 mon.a (mon.0) 684 : audit [INF] 
from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:39.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:39 vm07 bash[56315]: audit 2026-03-09T14:42:39.249119+0000 mon.a (mon.0) 685 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:39.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:39 vm07 bash[56315]: audit 2026-03-09T14:42:39.249119+0000 mon.a (mon.0) 685 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:39.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:39 vm07 bash[56315]: audit 2026-03-09T14:42:39.256521+0000 mon.a (mon.0) 686 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:39.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:39 vm07 bash[56315]: audit 2026-03-09T14:42:39.256521+0000 mon.a (mon.0) 686 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:40.002 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:39 vm11 bash[43577]: cluster 2026-03-09T14:42:38.571585+0000 mgr.y (mgr.44103) 280 : cluster [DBG] pgmap v163: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T14:42:40.002 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:39 vm11 bash[43577]: cluster 2026-03-09T14:42:38.571585+0000 mgr.y (mgr.44103) 280 : cluster [DBG] pgmap v163: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T14:42:40.002 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:39 vm11 bash[43577]: audit 2026-03-09T14:42:38.596021+0000 mon.a (mon.0) 681 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:40.002 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:39 vm11 bash[43577]: audit 2026-03-09T14:42:38.596021+0000 mon.a (mon.0) 681 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:40.002 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:39 vm11 bash[43577]: audit 2026-03-09T14:42:38.611063+0000 mon.a (mon.0) 682 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:40.002 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:39 vm11 bash[43577]: audit 2026-03-09T14:42:38.611063+0000 mon.a (mon.0) 682 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:40.002 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:39 vm11 bash[43577]: audit 2026-03-09T14:42:38.689916+0000 mon.a (mon.0) 683 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:40.002 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:39 vm11 bash[43577]: audit 2026-03-09T14:42:38.689916+0000 mon.a (mon.0) 683 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:40.002 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:39 vm11 bash[43577]: audit 2026-03-09T14:42:38.695715+0000 mon.a (mon.0) 684 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:40.002 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:39 vm11 bash[43577]: audit 2026-03-09T14:42:38.695715+0000 mon.a (mon.0) 684 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:40.002 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:39 vm11 bash[43577]: audit 2026-03-09T14:42:39.249119+0000 mon.a (mon.0) 685 : audit [INF] from='mgr.44103 
192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:40.002 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:39 vm11 bash[43577]: audit 2026-03-09T14:42:39.249119+0000 mon.a (mon.0) 685 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:40.002 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:39 vm11 bash[43577]: audit 2026-03-09T14:42:39.256521+0000 mon.a (mon.0) 686 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:40.002 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:39 vm11 bash[43577]: audit 2026-03-09T14:42:39.256521+0000 mon.a (mon.0) 686 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:41.903 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:41 vm07 bash[55244]: cluster 2026-03-09T14:42:40.571925+0000 mgr.y (mgr.44103) 281 : cluster [DBG] pgmap v164: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:42:41.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:41 vm07 bash[55244]: cluster 2026-03-09T14:42:40.571925+0000 mgr.y (mgr.44103) 281 : cluster [DBG] pgmap v164: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:42:41.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:41 vm07 bash[56315]: cluster 2026-03-09T14:42:40.571925+0000 mgr.y (mgr.44103) 281 : cluster [DBG] pgmap v164: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:42:41.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:41 vm07 bash[56315]: cluster 2026-03-09T14:42:40.571925+0000 mgr.y (mgr.44103) 281 : cluster [DBG] pgmap v164: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:42:42.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:41 vm11 bash[43577]: cluster 2026-03-09T14:42:40.571925+0000 mgr.y (mgr.44103) 281 : cluster [DBG] pgmap v164: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:42:42.003 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:41 vm11 bash[43577]: cluster 2026-03-09T14:42:40.571925+0000 mgr.y (mgr.44103) 281 : cluster [DBG] pgmap v164: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-09T14:42:42.903 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:42 vm07 bash[55244]: audit 2026-03-09T14:42:42.158401+0000 mgr.y (mgr.44103) 282 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:42:42.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:42 vm07 bash[55244]: audit 2026-03-09T14:42:42.158401+0000 mgr.y (mgr.44103) 282 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:42:42.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:42 vm07 bash[56315]: audit 2026-03-09T14:42:42.158401+0000 mgr.y (mgr.44103) 282 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:42:42.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:42 vm07 bash[56315]: audit 2026-03-09T14:42:42.158401+0000 mgr.y (mgr.44103) 282 : audit [DBG] from='client.34456 -' 
entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:42:43.002 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:42 vm11 bash[43577]: audit 2026-03-09T14:42:42.158401+0000 mgr.y (mgr.44103) 282 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:42:43.002 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:42 vm11 bash[43577]: audit 2026-03-09T14:42:42.158401+0000 mgr.y (mgr.44103) 282 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:42:43.903 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:43 vm07 bash[55244]: cluster 2026-03-09T14:42:42.572311+0000 mgr.y (mgr.44103) 283 : cluster [DBG] pgmap v165: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:43.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:43 vm07 bash[55244]: cluster 2026-03-09T14:42:42.572311+0000 mgr.y (mgr.44103) 283 : cluster [DBG] pgmap v165: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:43.904 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:42:43 vm07 bash[52213]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:42:43] "GET /metrics HTTP/1.1" 200 38252 "" "Prometheus/2.51.0" 2026-03-09T14:42:43.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:43 vm07 bash[56315]: cluster 2026-03-09T14:42:42.572311+0000 mgr.y (mgr.44103) 283 : cluster [DBG] pgmap v165: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:43.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:43 vm07 bash[56315]: cluster 2026-03-09T14:42:42.572311+0000 mgr.y (mgr.44103) 283 : cluster [DBG] pgmap v165: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:44.002 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:43 vm11 bash[43577]: cluster 2026-03-09T14:42:42.572311+0000 mgr.y (mgr.44103) 283 : cluster [DBG] pgmap v165: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:44.002 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:43 vm11 bash[43577]: cluster 2026-03-09T14:42:42.572311+0000 mgr.y (mgr.44103) 283 : cluster [DBG] pgmap v165: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:46.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: cluster 2026-03-09T14:42:44.572683+0000 mgr.y (mgr.44103) 284 : cluster [DBG] pgmap v166: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:42:46.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: cluster 2026-03-09T14:42:44.572683+0000 mgr.y (mgr.44103) 284 : cluster [DBG] pgmap v166: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:42:46.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.786527+0000 mon.a (mon.0) 687 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:46.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.786527+0000 mon.a 
(mon.0) 687 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:46.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.790599+0000 mon.a (mon.0) 688 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:46.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.790599+0000 mon.a (mon.0) 688 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:46.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.791512+0000 mon.a (mon.0) 689 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:42:46.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.791512+0000 mon.a (mon.0) 689 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:42:46.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.791927+0000 mon.a (mon.0) 690 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:42:46.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.791927+0000 mon.a (mon.0) 690 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:42:46.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.795098+0000 mon.a (mon.0) 691 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:46.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.795098+0000 mon.a (mon.0) 691 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:46.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.809065+0000 mon.a (mon.0) 692 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T14:42:46.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.809065+0000 mon.a (mon.0) 692 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T14:42:46.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.809294+0000 mgr.y (mgr.44103) 285 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T14:42:46.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.809294+0000 mgr.y (mgr.44103) 285 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T14:42:46.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.838483+0000 mon.a (mon.0) 693 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:42:46.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.838483+0000 mon.a (mon.0) 693 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:42:46.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.839607+0000 mon.a (mon.0) 694 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.839607+0000 mon.a (mon.0) 694 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.840554+0000 mon.a (mon.0) 695 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.840554+0000 mon.a (mon.0) 695 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.841230+0000 mon.a (mon.0) 696 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.841230+0000 mon.a (mon.0) 696 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.842557+0000 mon.a (mon.0) 697 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.842557+0000 mon.a (mon.0) 697 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.843792+0000 mon.a (mon.0) 698 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.843792+0000 mon.a (mon.0) 698 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.844848+0000 mon.a (mon.0) 699 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: 
dispatch 2026-03-09T14:42:46.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.844848+0000 mon.a (mon.0) 699 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.845543+0000 mon.a (mon.0) 700 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.845543+0000 mon.a (mon.0) 700 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.846220+0000 mon.a (mon.0) 701 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.846220+0000 mon.a (mon.0) 701 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.846893+0000 mon.a (mon.0) 702 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.846893+0000 mon.a (mon.0) 702 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.847631+0000 mon.a (mon.0) 703 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.847631+0000 mon.a (mon.0) 703 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.848292+0000 mon.a (mon.0) 704 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.848292+0000 mon.a (mon.0) 704 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.850207+0000 mon.a (mon.0) 705 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.850207+0000 mon.a (mon.0) 705 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 
bash[56315]: audit 2026-03-09T14:42:44.851158+0000 mon.a (mon.0) 706 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.851158+0000 mon.a (mon.0) 706 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.852022+0000 mon.a (mon.0) 707 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.852022+0000 mon.a (mon.0) 707 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.852909+0000 mon.a (mon.0) 708 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.852909+0000 mon.a (mon.0) 708 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.853781+0000 mon.a (mon.0) 709 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.853781+0000 mon.a (mon.0) 709 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.854520+0000 mon.a (mon.0) 710 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.854520+0000 mon.a (mon.0) 710 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.855234+0000 mon.a (mon.0) 711 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.855234+0000 mon.a (mon.0) 711 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: cephadm 2026-03-09T14:42:44.855699+0000 mgr.y (mgr.44103) 286 : cephadm [INF] Upgrade: Finalizing container_image settings 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: cephadm 2026-03-09T14:42:44.855699+0000 mgr.y (mgr.44103) 286 : cephadm [INF] Upgrade: Finalizing container_image settings 
2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.859010+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.859010+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.861383+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.861383+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.863665+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.863665+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.865578+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.865578+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.867949+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.867949+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.870785+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.870785+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: 
dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.905637+0000 mon.a (mon.0) 718 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.905637+0000 mon.a (mon.0) 718 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.908213+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.908213+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.973285+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.973285+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.974844+0000 mon.a (mon.0) 721 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.974844+0000 mon.a (mon.0) 721 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.977745+0000 mon.a (mon.0) 722 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.977745+0000 mon.a (mon.0) 722 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.980319+0000 mon.a (mon.0) 723 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 
2026-03-09T14:42:44.980319+0000 mon.a (mon.0) 723 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.982717+0000 mon.a (mon.0) 724 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]': finished 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.982717+0000 mon.a (mon.0) 724 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]': finished 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.985006+0000 mon.a (mon.0) 725 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.985006+0000 mon.a (mon.0) 725 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.987177+0000 mon.a (mon.0) 726 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.987177+0000 mon.a (mon.0) 726 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.988147+0000 mon.a (mon.0) 727 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.988147+0000 mon.a (mon.0) 727 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.988576+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.988576+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.992045+0000 mon.a (mon.0) 
729 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.992045+0000 mon.a (mon.0) 729 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.993044+0000 mon.a (mon.0) 730 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.993044+0000 mon.a (mon.0) 730 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.996567+0000 mon.a (mon.0) 731 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]': finished 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.996567+0000 mon.a (mon.0) 731 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]': finished 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.997598+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:44.997598+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:45.001601+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:45.001601+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:45.002789+0000 mon.a (mon.0) 734 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T14:42:46.156 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:45.002789+0000 mon.a (mon.0) 734 : audit [INF] from='mgr.44103 
192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:45.006615+0000 mon.a (mon.0) 735 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:45.006615+0000 mon.a (mon.0) 735 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:45.007747+0000 mon.a (mon.0) 736 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:45.007747+0000 mon.a (mon.0) 736 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:45.008156+0000 mon.a (mon.0) 737 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:45.008156+0000 mon.a (mon.0) 737 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:45.008758+0000 mon.a (mon.0) 738 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:45.008758+0000 mon.a (mon.0) 738 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:45.009165+0000 mon.a (mon.0) 739 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:45.009165+0000 mon.a (mon.0) 739 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:45.009547+0000 mon.a (mon.0) 740 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 
2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:45.009547+0000 mon.a (mon.0) 740 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:45.009930+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:45.009930+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: cephadm 2026-03-09T14:42:45.010297+0000 mgr.y (mgr.44103) 287 : cephadm [INF] Upgrade: Complete! 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: cephadm 2026-03-09T14:42:45.010297+0000 mgr.y (mgr.44103) 287 : cephadm [INF] Upgrade: Complete! 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:45.010641+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:45.010641+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:45.013687+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:45.013687+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:45.014499+0000 mon.a (mon.0) 744 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:45.014499+0000 mon.a (mon.0) 744 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:45.015095+0000 mon.a (mon.0) 745 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:45.015095+0000 mon.a (mon.0) 745 : 
audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:45.018935+0000 mon.a (mon.0) 746 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:45.018935+0000 mon.a (mon.0) 746 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:45.059166+0000 mon.a (mon.0) 747 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:45.059166+0000 mon.a (mon.0) 747 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:45.059797+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:45.059797+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:45.064499+0000 mon.a (mon.0) 749 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:45 vm07 bash[56315]: audit 2026-03-09T14:42:45.064499+0000 mon.a (mon.0) 749 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: cluster 2026-03-09T14:42:44.572683+0000 mgr.y (mgr.44103) 284 : cluster [DBG] pgmap v166: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: cluster 2026-03-09T14:42:44.572683+0000 mgr.y (mgr.44103) 284 : cluster [DBG] pgmap v166: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.786527+0000 mon.a (mon.0) 687 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.786527+0000 mon.a (mon.0) 687 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.790599+0000 mon.a (mon.0) 688 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: 
audit 2026-03-09T14:42:44.790599+0000 mon.a (mon.0) 688 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.791512+0000 mon.a (mon.0) 689 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.791512+0000 mon.a (mon.0) 689 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.791927+0000 mon.a (mon.0) 690 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.791927+0000 mon.a (mon.0) 690 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.795098+0000 mon.a (mon.0) 691 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.795098+0000 mon.a (mon.0) 691 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.809065+0000 mon.a (mon.0) 692 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.809065+0000 mon.a (mon.0) 692 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.809294+0000 mgr.y (mgr.44103) 285 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.809294+0000 mgr.y (mgr.44103) 285 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.838483+0000 mon.a (mon.0) 693 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.838483+0000 mon.a (mon.0) 693 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.839607+0000 mon.a (mon.0) 694 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.839607+0000 mon.a (mon.0) 694 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.840554+0000 mon.a (mon.0) 695 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.840554+0000 mon.a (mon.0) 695 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.157 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.841230+0000 mon.a (mon.0) 696 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.841230+0000 mon.a (mon.0) 696 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.842557+0000 mon.a (mon.0) 697 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.842557+0000 mon.a (mon.0) 697 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.843792+0000 mon.a (mon.0) 698 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.843792+0000 mon.a (mon.0) 698 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.844848+0000 mon.a (mon.0) 699 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: 
dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.844848+0000 mon.a (mon.0) 699 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.845543+0000 mon.a (mon.0) 700 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.845543+0000 mon.a (mon.0) 700 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.846220+0000 mon.a (mon.0) 701 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.846220+0000 mon.a (mon.0) 701 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.846893+0000 mon.a (mon.0) 702 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.846893+0000 mon.a (mon.0) 702 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.847631+0000 mon.a (mon.0) 703 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.847631+0000 mon.a (mon.0) 703 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.848292+0000 mon.a (mon.0) 704 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.848292+0000 mon.a (mon.0) 704 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.850207+0000 mon.a (mon.0) 705 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.850207+0000 mon.a (mon.0) 705 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 
bash[55244]: audit 2026-03-09T14:42:44.851158+0000 mon.a (mon.0) 706 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.851158+0000 mon.a (mon.0) 706 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.852022+0000 mon.a (mon.0) 707 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.852022+0000 mon.a (mon.0) 707 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.852909+0000 mon.a (mon.0) 708 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.852909+0000 mon.a (mon.0) 708 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.853781+0000 mon.a (mon.0) 709 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.853781+0000 mon.a (mon.0) 709 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.854520+0000 mon.a (mon.0) 710 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.854520+0000 mon.a (mon.0) 710 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.855234+0000 mon.a (mon.0) 711 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.855234+0000 mon.a (mon.0) 711 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: cephadm 2026-03-09T14:42:44.855699+0000 mgr.y (mgr.44103) 286 : cephadm [INF] Upgrade: Finalizing container_image settings 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: cephadm 2026-03-09T14:42:44.855699+0000 mgr.y (mgr.44103) 286 : cephadm [INF] Upgrade: Finalizing container_image settings 
2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.859010+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.859010+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.861383+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.861383+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.863665+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.863665+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.865578+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.865578+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.867949+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.867949+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.870785+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.870785+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: 
dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.905637+0000 mon.a (mon.0) 718 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.905637+0000 mon.a (mon.0) 718 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.908213+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.908213+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.973285+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.973285+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.974844+0000 mon.a (mon.0) 721 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.974844+0000 mon.a (mon.0) 721 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.977745+0000 mon.a (mon.0) 722 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.977745+0000 mon.a (mon.0) 722 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.980319+0000 mon.a (mon.0) 723 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 
2026-03-09T14:42:44.980319+0000 mon.a (mon.0) 723 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T14:42:46.158 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.982717+0000 mon.a (mon.0) 724 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]': finished 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.982717+0000 mon.a (mon.0) 724 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]': finished 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.985006+0000 mon.a (mon.0) 725 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.985006+0000 mon.a (mon.0) 725 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.987177+0000 mon.a (mon.0) 726 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.987177+0000 mon.a (mon.0) 726 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.988147+0000 mon.a (mon.0) 727 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.988147+0000 mon.a (mon.0) 727 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.988576+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.988576+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.992045+0000 mon.a (mon.0) 
729 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.992045+0000 mon.a (mon.0) 729 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.993044+0000 mon.a (mon.0) 730 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.993044+0000 mon.a (mon.0) 730 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.996567+0000 mon.a (mon.0) 731 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]': finished 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.996567+0000 mon.a (mon.0) 731 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]': finished 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.997598+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:44.997598+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:45.001601+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:45.001601+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:45.002789+0000 mon.a (mon.0) 734 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:45.002789+0000 mon.a (mon.0) 734 : audit [INF] from='mgr.44103 
192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:45.006615+0000 mon.a (mon.0) 735 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:45.006615+0000 mon.a (mon.0) 735 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:45.007747+0000 mon.a (mon.0) 736 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:45.007747+0000 mon.a (mon.0) 736 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:45.008156+0000 mon.a (mon.0) 737 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:45.008156+0000 mon.a (mon.0) 737 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:45.008758+0000 mon.a (mon.0) 738 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:45.008758+0000 mon.a (mon.0) 738 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:45.009165+0000 mon.a (mon.0) 739 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:45.009165+0000 mon.a (mon.0) 739 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:45.009547+0000 mon.a (mon.0) 740 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 
2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:45.009547+0000 mon.a (mon.0) 740 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:45.009930+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:45.009930+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: cephadm 2026-03-09T14:42:45.010297+0000 mgr.y (mgr.44103) 287 : cephadm [INF] Upgrade: Complete! 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: cephadm 2026-03-09T14:42:45.010297+0000 mgr.y (mgr.44103) 287 : cephadm [INF] Upgrade: Complete! 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:45.010641+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:45.010641+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:45.013687+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:45.013687+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:45.014499+0000 mon.a (mon.0) 744 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:45.014499+0000 mon.a (mon.0) 744 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:45.015095+0000 mon.a (mon.0) 745 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:45.015095+0000 mon.a (mon.0) 745 : 
audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:45.018935+0000 mon.a (mon.0) 746 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:45.018935+0000 mon.a (mon.0) 746 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:45.059166+0000 mon.a (mon.0) 747 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:45.059166+0000 mon.a (mon.0) 747 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:45.059797+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:45.059797+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:45.064499+0000 mon.a (mon.0) 749 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:46.159 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:45 vm07 bash[55244]: audit 2026-03-09T14:42:45.064499+0000 mon.a (mon.0) 749 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:46.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: cluster 2026-03-09T14:42:44.572683+0000 mgr.y (mgr.44103) 284 : cluster [DBG] pgmap v166: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:42:46.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: cluster 2026-03-09T14:42:44.572683+0000 mgr.y (mgr.44103) 284 : cluster [DBG] pgmap v166: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:42:46.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.786527+0000 mon.a (mon.0) 687 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:46.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.786527+0000 mon.a (mon.0) 687 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:46.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.790599+0000 mon.a (mon.0) 688 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:46.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: 
audit 2026-03-09T14:42:44.790599+0000 mon.a (mon.0) 688 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:46.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.791512+0000 mon.a (mon.0) 689 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:42:46.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.791512+0000 mon.a (mon.0) 689 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:42:46.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.791927+0000 mon.a (mon.0) 690 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:42:46.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.791927+0000 mon.a (mon.0) 690 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:42:46.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.795098+0000 mon.a (mon.0) 691 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:46.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.795098+0000 mon.a (mon.0) 691 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:46.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.809065+0000 mon.a (mon.0) 692 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T14:42:46.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.809065+0000 mon.a (mon.0) 692 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T14:42:46.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.809294+0000 mgr.y (mgr.44103) 285 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T14:42:46.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.809294+0000 mgr.y (mgr.44103) 285 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T14:42:46.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.838483+0000 mon.a (mon.0) 693 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:42:46.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.838483+0000 mon.a (mon.0) 693 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:42:46.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.839607+0000 mon.a (mon.0) 694 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.839607+0000 mon.a (mon.0) 694 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.840554+0000 mon.a (mon.0) 695 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.840554+0000 mon.a (mon.0) 695 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.841230+0000 mon.a (mon.0) 696 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.841230+0000 mon.a (mon.0) 696 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.842557+0000 mon.a (mon.0) 697 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.842557+0000 mon.a (mon.0) 697 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.843792+0000 mon.a (mon.0) 698 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.843792+0000 mon.a (mon.0) 698 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.844848+0000 mon.a (mon.0) 699 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: 
dispatch 2026-03-09T14:42:46.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.844848+0000 mon.a (mon.0) 699 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.845543+0000 mon.a (mon.0) 700 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.845543+0000 mon.a (mon.0) 700 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.846220+0000 mon.a (mon.0) 701 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.846220+0000 mon.a (mon.0) 701 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.846893+0000 mon.a (mon.0) 702 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.846893+0000 mon.a (mon.0) 702 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.253 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.847631+0000 mon.a (mon.0) 703 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.847631+0000 mon.a (mon.0) 703 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.848292+0000 mon.a (mon.0) 704 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.848292+0000 mon.a (mon.0) 704 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.850207+0000 mon.a (mon.0) 705 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.850207+0000 mon.a (mon.0) 705 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 
bash[43577]: audit 2026-03-09T14:42:44.851158+0000 mon.a (mon.0) 706 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.851158+0000 mon.a (mon.0) 706 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.852022+0000 mon.a (mon.0) 707 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.852022+0000 mon.a (mon.0) 707 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.852909+0000 mon.a (mon.0) 708 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.852909+0000 mon.a (mon.0) 708 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.853781+0000 mon.a (mon.0) 709 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.853781+0000 mon.a (mon.0) 709 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.854520+0000 mon.a (mon.0) 710 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.854520+0000 mon.a (mon.0) 710 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.855234+0000 mon.a (mon.0) 711 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.855234+0000 mon.a (mon.0) 711 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: cephadm 2026-03-09T14:42:44.855699+0000 mgr.y (mgr.44103) 286 : cephadm [INF] Upgrade: Finalizing container_image settings 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: cephadm 2026-03-09T14:42:44.855699+0000 mgr.y (mgr.44103) 286 : cephadm [INF] Upgrade: Finalizing container_image settings 
2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.859010+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.859010+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.861383+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.861383+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.863665+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.863665+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.865578+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.865578+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.867949+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.867949+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.870785+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.870785+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: 
dispatch 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.905637+0000 mon.a (mon.0) 718 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.905637+0000 mon.a (mon.0) 718 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.908213+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.908213+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.973285+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.973285+0000 mon.a (mon.0) 720 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.974844+0000 mon.a (mon.0) 721 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.974844+0000 mon.a (mon.0) 721 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.977745+0000 mon.a (mon.0) 722 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.977745+0000 mon.a (mon.0) 722 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.980319+0000 mon.a (mon.0) 723 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 
2026-03-09T14:42:44.980319+0000 mon.a (mon.0) 723 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.982717+0000 mon.a (mon.0) 724 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]': finished 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.982717+0000 mon.a (mon.0) 724 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]': finished 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.985006+0000 mon.a (mon.0) 725 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.985006+0000 mon.a (mon.0) 725 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.987177+0000 mon.a (mon.0) 726 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.987177+0000 mon.a (mon.0) 726 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.988147+0000 mon.a (mon.0) 727 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.988147+0000 mon.a (mon.0) 727 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.988576+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.988576+0000 mon.a (mon.0) 728 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.992045+0000 mon.a (mon.0) 
729 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.992045+0000 mon.a (mon.0) 729 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.993044+0000 mon.a (mon.0) 730 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.993044+0000 mon.a (mon.0) 730 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.996567+0000 mon.a (mon.0) 731 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]': finished 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.996567+0000 mon.a (mon.0) 731 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]': finished 2026-03-09T14:42:46.254 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.997598+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T14:42:46.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:44.997598+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-09T14:42:46.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:45.001601+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-09T14:42:46.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:45.001601+0000 mon.a (mon.0) 733 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-09T14:42:46.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:45.002789+0000 mon.a (mon.0) 734 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T14:42:46.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:45.002789+0000 mon.a (mon.0) 734 : audit [INF] from='mgr.44103 
192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-09T14:42:46.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:45.006615+0000 mon.a (mon.0) 735 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-09T14:42:46.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:45.006615+0000 mon.a (mon.0) 735 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-09T14:42:46.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:45.007747+0000 mon.a (mon.0) 736 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:45.007747+0000 mon.a (mon.0) 736 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:45.008156+0000 mon.a (mon.0) 737 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:45.008156+0000 mon.a (mon.0) 737 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:45.008758+0000 mon.a (mon.0) 738 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:45.008758+0000 mon.a (mon.0) 738 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:45.009165+0000 mon.a (mon.0) 739 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:45.009165+0000 mon.a (mon.0) 739 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:45.009547+0000 mon.a (mon.0) 740 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 
2026-03-09T14:42:46.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:45.009547+0000 mon.a (mon.0) 740 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:45.009930+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:45.009930+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-09T14:42:46.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: cephadm 2026-03-09T14:42:45.010297+0000 mgr.y (mgr.44103) 287 : cephadm [INF] Upgrade: Complete! 2026-03-09T14:42:46.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: cephadm 2026-03-09T14:42:45.010297+0000 mgr.y (mgr.44103) 287 : cephadm [INF] Upgrade: Complete! 2026-03-09T14:42:46.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:45.010641+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T14:42:46.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:45.010641+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-09T14:42:46.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:45.013687+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T14:42:46.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:45.013687+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-09T14:42:46.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:45.014499+0000 mon.a (mon.0) 744 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:42:46.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:45.014499+0000 mon.a (mon.0) 744 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:42:46.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:45.015095+0000 mon.a (mon.0) 745 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:42:46.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:45.015095+0000 mon.a (mon.0) 745 : 
audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:42:46.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:45.018935+0000 mon.a (mon.0) 746 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:46.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:45.018935+0000 mon.a (mon.0) 746 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:46.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:45.059166+0000 mon.a (mon.0) 747 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:42:46.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:45.059166+0000 mon.a (mon.0) 747 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:42:46.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:45.059797+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:42:46.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:45.059797+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:42:46.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:45.064499+0000 mon.a (mon.0) 749 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:46.255 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:45 vm11 bash[43577]: audit 2026-03-09T14:42:45.064499+0000 mon.a (mon.0) 749 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:48.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:47 vm07 bash[55244]: cluster 2026-03-09T14:42:46.573118+0000 mgr.y (mgr.44103) 288 : cluster [DBG] pgmap v167: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:48.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:47 vm07 bash[55244]: cluster 2026-03-09T14:42:46.573118+0000 mgr.y (mgr.44103) 288 : cluster [DBG] pgmap v167: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:48.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:47 vm07 bash[55244]: audit 2026-03-09T14:42:47.663494+0000 mon.a (mon.0) 750 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:48.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:47 vm07 bash[55244]: audit 2026-03-09T14:42:47.663494+0000 mon.a (mon.0) 750 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:48.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:47 vm07 bash[56315]: cluster 2026-03-09T14:42:46.573118+0000 mgr.y (mgr.44103) 288 : cluster [DBG] pgmap v167: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:48.154 
INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:47 vm07 bash[56315]: cluster 2026-03-09T14:42:46.573118+0000 mgr.y (mgr.44103) 288 : cluster [DBG] pgmap v167: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:48.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:47 vm07 bash[56315]: audit 2026-03-09T14:42:47.663494+0000 mon.a (mon.0) 750 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:48.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:47 vm07 bash[56315]: audit 2026-03-09T14:42:47.663494+0000 mon.a (mon.0) 750 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:48.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:47 vm11 bash[43577]: cluster 2026-03-09T14:42:46.573118+0000 mgr.y (mgr.44103) 288 : cluster [DBG] pgmap v167: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:48.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:47 vm11 bash[43577]: cluster 2026-03-09T14:42:46.573118+0000 mgr.y (mgr.44103) 288 : cluster [DBG] pgmap v167: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:48.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:47 vm11 bash[43577]: audit 2026-03-09T14:42:47.663494+0000 mon.a (mon.0) 750 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:48.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:47 vm11 bash[43577]: audit 2026-03-09T14:42:47.663494+0000 mon.a (mon.0) 750 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:42:50.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:49 vm07 bash[55244]: cluster 2026-03-09T14:42:48.573437+0000 mgr.y (mgr.44103) 289 : cluster [DBG] pgmap v168: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:42:50.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:49 vm07 bash[55244]: cluster 2026-03-09T14:42:48.573437+0000 mgr.y (mgr.44103) 289 : cluster [DBG] pgmap v168: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:42:50.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:49 vm07 bash[56315]: cluster 2026-03-09T14:42:48.573437+0000 mgr.y (mgr.44103) 289 : cluster [DBG] pgmap v168: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:42:50.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:49 vm07 bash[56315]: cluster 2026-03-09T14:42:48.573437+0000 mgr.y (mgr.44103) 289 : cluster [DBG] pgmap v168: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:42:50.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:49 vm11 bash[43577]: cluster 2026-03-09T14:42:48.573437+0000 mgr.y (mgr.44103) 289 : cluster [DBG] pgmap v168: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:42:50.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:49 vm11 bash[43577]: cluster 2026-03-09T14:42:48.573437+0000 mgr.y (mgr.44103) 289 : cluster [DBG] pgmap v168: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:42:52.153 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:51 vm07 
bash[55244]: cluster 2026-03-09T14:42:50.573778+0000 mgr.y (mgr.44103) 290 : cluster [DBG] pgmap v169: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:52.153 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:51 vm07 bash[55244]: cluster 2026-03-09T14:42:50.573778+0000 mgr.y (mgr.44103) 290 : cluster [DBG] pgmap v169: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:52.153 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:51 vm07 bash[56315]: cluster 2026-03-09T14:42:50.573778+0000 mgr.y (mgr.44103) 290 : cluster [DBG] pgmap v169: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:52.153 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:51 vm07 bash[56315]: cluster 2026-03-09T14:42:50.573778+0000 mgr.y (mgr.44103) 290 : cluster [DBG] pgmap v169: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:52.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:51 vm11 bash[43577]: cluster 2026-03-09T14:42:50.573778+0000 mgr.y (mgr.44103) 290 : cluster [DBG] pgmap v169: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:52.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:51 vm11 bash[43577]: cluster 2026-03-09T14:42:50.573778+0000 mgr.y (mgr.44103) 290 : cluster [DBG] pgmap v169: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:53.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:52 vm07 bash[55244]: audit 2026-03-09T14:42:52.162222+0000 mgr.y (mgr.44103) 291 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:42:53.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:52 vm07 bash[55244]: audit 2026-03-09T14:42:52.162222+0000 mgr.y (mgr.44103) 291 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:42:53.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:52 vm07 bash[55244]: audit 2026-03-09T14:42:52.576275+0000 mon.a (mon.0) 751 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:42:53.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:52 vm07 bash[55244]: audit 2026-03-09T14:42:52.576275+0000 mon.a (mon.0) 751 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:42:53.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:52 vm07 bash[56315]: audit 2026-03-09T14:42:52.162222+0000 mgr.y (mgr.44103) 291 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:42:53.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:52 vm07 bash[56315]: audit 2026-03-09T14:42:52.162222+0000 mgr.y (mgr.44103) 291 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:42:53.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:52 vm07 bash[56315]: audit 2026-03-09T14:42:52.576275+0000 mon.a (mon.0) 751 : audit 
[DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:42:53.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:52 vm07 bash[56315]: audit 2026-03-09T14:42:52.576275+0000 mon.a (mon.0) 751 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:42:53.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:52 vm11 bash[43577]: audit 2026-03-09T14:42:52.162222+0000 mgr.y (mgr.44103) 291 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:42:53.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:52 vm11 bash[43577]: audit 2026-03-09T14:42:52.162222+0000 mgr.y (mgr.44103) 291 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:42:53.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:52 vm11 bash[43577]: audit 2026-03-09T14:42:52.576275+0000 mon.a (mon.0) 751 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:42:53.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:52 vm11 bash[43577]: audit 2026-03-09T14:42:52.576275+0000 mon.a (mon.0) 751 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:42:53.806 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:42:53 vm07 bash[52213]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:42:53] "GET /metrics HTTP/1.1" 200 38250 "" "Prometheus/2.51.0" 2026-03-09T14:42:54.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:53 vm07 bash[55244]: cluster 2026-03-09T14:42:52.574136+0000 mgr.y (mgr.44103) 292 : cluster [DBG] pgmap v170: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:54.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:53 vm07 bash[55244]: cluster 2026-03-09T14:42:52.574136+0000 mgr.y (mgr.44103) 292 : cluster [DBG] pgmap v170: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:54.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:53 vm07 bash[56315]: cluster 2026-03-09T14:42:52.574136+0000 mgr.y (mgr.44103) 292 : cluster [DBG] pgmap v170: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:54.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:53 vm07 bash[56315]: cluster 2026-03-09T14:42:52.574136+0000 mgr.y (mgr.44103) 292 : cluster [DBG] pgmap v170: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:54.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:53 vm11 bash[43577]: cluster 2026-03-09T14:42:52.574136+0000 mgr.y (mgr.44103) 292 : cluster [DBG] pgmap v170: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:54.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:53 vm11 bash[43577]: cluster 2026-03-09T14:42:52.574136+0000 mgr.y (mgr.44103) 292 : cluster [DBG] pgmap v170: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:56.154 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:55 vm07 bash[55244]: cluster 2026-03-09T14:42:54.574541+0000 mgr.y (mgr.44103) 293 : cluster [DBG] pgmap v171: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:42:56.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:55 vm07 bash[55244]: cluster 2026-03-09T14:42:54.574541+0000 mgr.y (mgr.44103) 293 : cluster [DBG] pgmap v171: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:42:56.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:55 vm07 bash[56315]: cluster 2026-03-09T14:42:54.574541+0000 mgr.y (mgr.44103) 293 : cluster [DBG] pgmap v171: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:42:56.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:55 vm07 bash[56315]: cluster 2026-03-09T14:42:54.574541+0000 mgr.y (mgr.44103) 293 : cluster [DBG] pgmap v171: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:42:56.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:55 vm11 bash[43577]: cluster 2026-03-09T14:42:54.574541+0000 mgr.y (mgr.44103) 293 : cluster [DBG] pgmap v171: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:42:56.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:55 vm11 bash[43577]: cluster 2026-03-09T14:42:54.574541+0000 mgr.y (mgr.44103) 293 : cluster [DBG] pgmap v171: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:42:58.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:57 vm07 bash[55244]: cluster 2026-03-09T14:42:56.575028+0000 mgr.y (mgr.44103) 294 : cluster [DBG] pgmap v172: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:58.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:57 vm07 bash[55244]: cluster 2026-03-09T14:42:56.575028+0000 mgr.y (mgr.44103) 294 : cluster [DBG] pgmap v172: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:58.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:57 vm07 bash[56315]: cluster 2026-03-09T14:42:56.575028+0000 mgr.y (mgr.44103) 294 : cluster [DBG] pgmap v172: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:58.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:57 vm07 bash[56315]: cluster 2026-03-09T14:42:56.575028+0000 mgr.y (mgr.44103) 294 : cluster [DBG] pgmap v172: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:58.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:57 vm11 bash[43577]: cluster 2026-03-09T14:42:56.575028+0000 mgr.y (mgr.44103) 294 : cluster [DBG] pgmap v172: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:42:58.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:57 vm11 bash[43577]: cluster 2026-03-09T14:42:56.575028+0000 mgr.y (mgr.44103) 294 : cluster [DBG] pgmap v172: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:00.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:59 vm07 bash[55244]: cluster 
2026-03-09T14:42:58.575374+0000 mgr.y (mgr.44103) 295 : cluster [DBG] pgmap v173: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:00.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:42:59 vm07 bash[55244]: cluster 2026-03-09T14:42:58.575374+0000 mgr.y (mgr.44103) 295 : cluster [DBG] pgmap v173: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:00.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:59 vm07 bash[56315]: cluster 2026-03-09T14:42:58.575374+0000 mgr.y (mgr.44103) 295 : cluster [DBG] pgmap v173: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:00.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:42:59 vm07 bash[56315]: cluster 2026-03-09T14:42:58.575374+0000 mgr.y (mgr.44103) 295 : cluster [DBG] pgmap v173: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:00.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:59 vm11 bash[43577]: cluster 2026-03-09T14:42:58.575374+0000 mgr.y (mgr.44103) 295 : cluster [DBG] pgmap v173: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:00.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:42:59 vm11 bash[43577]: cluster 2026-03-09T14:42:58.575374+0000 mgr.y (mgr.44103) 295 : cluster [DBG] pgmap v173: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:02.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:01 vm07 bash[55244]: cluster 2026-03-09T14:43:00.575792+0000 mgr.y (mgr.44103) 296 : cluster [DBG] pgmap v174: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:02.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:01 vm07 bash[55244]: cluster 2026-03-09T14:43:00.575792+0000 mgr.y (mgr.44103) 296 : cluster [DBG] pgmap v174: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:02.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:01 vm07 bash[56315]: cluster 2026-03-09T14:43:00.575792+0000 mgr.y (mgr.44103) 296 : cluster [DBG] pgmap v174: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:02.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:01 vm07 bash[56315]: cluster 2026-03-09T14:43:00.575792+0000 mgr.y (mgr.44103) 296 : cluster [DBG] pgmap v174: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:02.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:01 vm11 bash[43577]: cluster 2026-03-09T14:43:00.575792+0000 mgr.y (mgr.44103) 296 : cluster [DBG] pgmap v174: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:02.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:01 vm11 bash[43577]: cluster 2026-03-09T14:43:00.575792+0000 mgr.y (mgr.44103) 296 : cluster [DBG] pgmap v174: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:03.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:02 vm07 bash[55244]: audit 2026-03-09T14:43:02.170003+0000 mgr.y (mgr.44103) 297 : audit [DBG] from='client.34456 -' 
entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:43:03.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:02 vm07 bash[55244]: audit 2026-03-09T14:43:02.170003+0000 mgr.y (mgr.44103) 297 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:43:03.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:02 vm07 bash[56315]: audit 2026-03-09T14:43:02.170003+0000 mgr.y (mgr.44103) 297 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:43:03.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:02 vm07 bash[56315]: audit 2026-03-09T14:43:02.170003+0000 mgr.y (mgr.44103) 297 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:43:03.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:02 vm11 bash[43577]: audit 2026-03-09T14:43:02.170003+0000 mgr.y (mgr.44103) 297 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:43:03.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:02 vm11 bash[43577]: audit 2026-03-09T14:43:02.170003+0000 mgr.y (mgr.44103) 297 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:43:03.837 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:43:03 vm07 bash[52213]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:43:03] "GET /metrics HTTP/1.1" 200 38250 "" "Prometheus/2.51.0" 2026-03-09T14:43:04.153 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:03 vm07 bash[55244]: cluster 2026-03-09T14:43:02.576233+0000 mgr.y (mgr.44103) 298 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:04.153 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:03 vm07 bash[55244]: cluster 2026-03-09T14:43:02.576233+0000 mgr.y (mgr.44103) 298 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:04.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:03 vm07 bash[56315]: cluster 2026-03-09T14:43:02.576233+0000 mgr.y (mgr.44103) 298 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:04.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:03 vm07 bash[56315]: cluster 2026-03-09T14:43:02.576233+0000 mgr.y (mgr.44103) 298 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:04.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:03 vm11 bash[43577]: cluster 2026-03-09T14:43:02.576233+0000 mgr.y (mgr.44103) 298 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:04.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:03 vm11 bash[43577]: cluster 2026-03-09T14:43:02.576233+0000 mgr.y (mgr.44103) 298 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:05.185 DEBUG:teuthology.orchestra.run.vm07:> 
sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps'
2026-03-09T14:43:05.617 INFO:teuthology.orchestra.run.vm07.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-09T14:43:05.617 INFO:teuthology.orchestra.run.vm07.stdout:alertmanager.a vm07 *:9093,9094 running (5m) 27s ago 10m 13.7M - 0.25.0 c8568f914cd2 7b5214f8e385
2026-03-09T14:43:05.617 INFO:teuthology.orchestra.run.vm07.stdout:grafana.a vm11 *:3000 running (32s) 26s ago 10m 53.3M - 10.4.0 c8b91775d855 63d856f6fd6e
2026-03-09T14:43:05.617 INFO:teuthology.orchestra.run.vm07.stdout:iscsi.foo.vm07.ohlmos vm07 running (53s) 27s ago 9m 49.0M - 3.9 654f31e6858e fe7cab5d4b5d
2026-03-09T14:43:05.617 INFO:teuthology.orchestra.run.vm07.stdout:mgr.x vm11 *:8443,9283,8765 running (4m) 26s ago 12m 466M - 19.2.3-678-ge911bdeb 654f31e6858e d35dddd392d1
2026-03-09T14:43:05.617 INFO:teuthology.orchestra.run.vm07.stdout:mgr.y vm07 *:8443,9283,8765 running (5m) 27s ago 13m 545M - 19.2.3-678-ge911bdeb 654f31e6858e bdbac6dff330
2026-03-09T14:43:05.617 INFO:teuthology.orchestra.run.vm07.stdout:mon.a vm07 running (4m) 27s ago 13m 54.8M 2048M 19.2.3-678-ge911bdeb 654f31e6858e bcdaa5dfc948
2026-03-09T14:43:05.617 INFO:teuthology.orchestra.run.vm07.stdout:mon.b vm11 running (3m) 26s ago 13m 47.7M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1caba9bf8a13
2026-03-09T14:43:05.618 INFO:teuthology.orchestra.run.vm07.stdout:mon.c vm07 running (4m) 27s ago 13m 53.5M 2048M 19.2.3-678-ge911bdeb 654f31e6858e ff7dfe3a6c7c
2026-03-09T14:43:05.618 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.a vm07 *:9100 running (5m) 27s ago 10m 7667k - 1.7.0 72c9c2088986 16d64a9c3aa7
2026-03-09T14:43:05.618 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.b vm11 *:9100 running (5m) 26s ago 10m 7543k - 1.7.0 72c9c2088986 8e368c535897
2026-03-09T14:43:05.618 INFO:teuthology.orchestra.run.vm07.stdout:osd.0 vm07 running (3m) 27s ago 12m 53.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 24632814894d
2026-03-09T14:43:05.618 INFO:teuthology.orchestra.run.vm07.stdout:osd.1 vm07 running (2m) 27s ago 12m 75.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 1f773b5d0f68
2026-03-09T14:43:05.618 INFO:teuthology.orchestra.run.vm07.stdout:osd.2 vm07 running (3m) 27s ago 12m 70.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 7d943c2f091c
2026-03-09T14:43:05.618 INFO:teuthology.orchestra.run.vm07.stdout:osd.3 vm07 running (3m) 27s ago 12m 56.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 7c234b83449a
2026-03-09T14:43:05.618 INFO:teuthology.orchestra.run.vm07.stdout:osd.4 vm11 running (2m) 26s ago 11m 54.6M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 811379ab4ba5
2026-03-09T14:43:05.618 INFO:teuthology.orchestra.run.vm07.stdout:osd.5 vm11 running (2m) 26s ago 11m 71.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e bc7e71aa5718
2026-03-09T14:43:05.618 INFO:teuthology.orchestra.run.vm07.stdout:osd.6 vm11 running (113s) 26s ago 11m 48.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 20bc2716b966
2026-03-09T14:43:05.618 INFO:teuthology.orchestra.run.vm07.stdout:osd.7 vm11 running (96s) 26s ago 11m 71.6M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 2557f7ad255a
2026-03-09T14:43:05.618 INFO:teuthology.orchestra.run.vm07.stdout:prometheus.a vm11 *:9095 running (4m) 26s ago 10m 40.7M - 2.51.0 1d3b7f56885b e88f0339687c
2026-03-09T14:43:05.618 INFO:teuthology.orchestra.run.vm07.stdout:rgw.foo.vm07.urmgxb vm07 *:8000 running (80s) 27s ago 9m 91.6M - 19.2.3-678-ge911bdeb 654f31e6858e df702c44464d
2026-03-09T14:43:05.618 INFO:teuthology.orchestra.run.vm07.stdout:rgw.foo.vm11.ncyump vm11 *:8000 running (78s) 26s ago 9m 91.5M - 19.2.3-678-ge911bdeb 654f31e6858e 75ca9d41b995
2026-03-09T14:43:05.618 INFO:teuthology.orchestra.run.vm07.stdout:rgw.smpl.vm07.tkkeli vm07 *:80 running (82s) 27s ago 9m 91.6M - 19.2.3-678-ge911bdeb 654f31e6858e 9a13050e9ad3
2026-03-09T14:43:05.618 INFO:teuthology.orchestra.run.vm07.stdout:rgw.smpl.vm11.ocxkef vm11 *:80 running (76s) 26s ago 9m 93.5M - 19.2.3-678-ge911bdeb 654f31e6858e 3dd8df0c45b8
2026-03-09T14:43:05.668 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions'
2026-03-09T14:43:06.131 INFO:teuthology.orchestra.run.vm07.stdout:{
2026-03-09T14:43:06.131 INFO:teuthology.orchestra.run.vm07.stdout: "mon": {
2026-03-09T14:43:06.131 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3
2026-03-09T14:43:06.131 INFO:teuthology.orchestra.run.vm07.stdout: },
2026-03-09T14:43:06.131 INFO:teuthology.orchestra.run.vm07.stdout: "mgr": {
2026-03-09T14:43:06.131 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-09T14:43:06.131 INFO:teuthology.orchestra.run.vm07.stdout: },
2026-03-09T14:43:06.131 INFO:teuthology.orchestra.run.vm07.stdout: "osd": {
2026-03-09T14:43:06.131 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 8
2026-03-09T14:43:06.131 INFO:teuthology.orchestra.run.vm07.stdout: },
2026-03-09T14:43:06.131 INFO:teuthology.orchestra.run.vm07.stdout: "rgw": {
2026-03-09T14:43:06.131 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 4
2026-03-09T14:43:06.131 INFO:teuthology.orchestra.run.vm07.stdout: },
2026-03-09T14:43:06.131 INFO:teuthology.orchestra.run.vm07.stdout: "overall": {
2026-03-09T14:43:06.131 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 17
2026-03-09T14:43:06.131 INFO:teuthology.orchestra.run.vm07.stdout: }
2026-03-09T14:43:06.131 INFO:teuthology.orchestra.run.vm07.stdout:}
2026-03-09T14:43:06.142 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:05 vm07 bash[55244]: cluster 2026-03-09T14:43:04.576594+0000 mgr.y (mgr.44103) 299 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:43:06.142 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:05 vm07 bash[55244]: cluster 2026-03-09T14:43:04.576594+0000 mgr.y (mgr.44103) 299 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:43:06.142 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:05 vm07 bash[55244]: audit 2026-03-09T14:43:05.105556+0000 mgr.y (mgr.44103) 300 : audit [DBG] from='client.44559 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]:
dispatch 2026-03-09T14:43:06.142 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:05 vm07 bash[55244]: audit 2026-03-09T14:43:05.105556+0000 mgr.y (mgr.44103) 300 : audit [DBG] from='client.44559 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:43:06.142 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:05 vm07 bash[56315]: cluster 2026-03-09T14:43:04.576594+0000 mgr.y (mgr.44103) 299 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:06.143 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:05 vm07 bash[56315]: cluster 2026-03-09T14:43:04.576594+0000 mgr.y (mgr.44103) 299 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:06.143 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:05 vm07 bash[56315]: audit 2026-03-09T14:43:05.105556+0000 mgr.y (mgr.44103) 300 : audit [DBG] from='client.44559 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:43:06.143 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:05 vm07 bash[56315]: audit 2026-03-09T14:43:05.105556+0000 mgr.y (mgr.44103) 300 : audit [DBG] from='client.44559 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:43:06.182 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'echo "wait for servicemap items w/ changing names to refresh"' 2026-03-09T14:43:06.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:05 vm11 bash[43577]: cluster 2026-03-09T14:43:04.576594+0000 mgr.y (mgr.44103) 299 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:06.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:05 vm11 bash[43577]: cluster 2026-03-09T14:43:04.576594+0000 mgr.y (mgr.44103) 299 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:06.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:05 vm11 bash[43577]: audit 2026-03-09T14:43:05.105556+0000 mgr.y (mgr.44103) 300 : audit [DBG] from='client.44559 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:43:06.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:05 vm11 bash[43577]: audit 2026-03-09T14:43:05.105556+0000 mgr.y (mgr.44103) 300 : audit [DBG] from='client.44559 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:43:06.692 INFO:teuthology.orchestra.run.vm07.stdout:wait for servicemap items w/ changing names to refresh 2026-03-09T14:43:06.731 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'sleep 60' 2026-03-09T14:43:06.981 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:06 vm07 bash[55244]: audit 
2026-03-09T14:43:05.622852+0000 mgr.y (mgr.44103) 301 : audit [DBG] from='client.34558 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:43:06.981 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:06 vm07 bash[55244]: audit 2026-03-09T14:43:05.622852+0000 mgr.y (mgr.44103) 301 : audit [DBG] from='client.34558 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:43:06.981 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:06 vm07 bash[55244]: audit 2026-03-09T14:43:06.140461+0000 mon.a (mon.0) 752 : audit [DBG] from='client.? 192.168.123.107:0/3048376729' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:43:06.981 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:06 vm07 bash[55244]: audit 2026-03-09T14:43:06.140461+0000 mon.a (mon.0) 752 : audit [DBG] from='client.? 192.168.123.107:0/3048376729' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:43:06.981 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:06 vm07 bash[56315]: audit 2026-03-09T14:43:05.622852+0000 mgr.y (mgr.44103) 301 : audit [DBG] from='client.34558 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:43:06.981 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:06 vm07 bash[56315]: audit 2026-03-09T14:43:05.622852+0000 mgr.y (mgr.44103) 301 : audit [DBG] from='client.34558 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:43:06.981 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:06 vm07 bash[56315]: audit 2026-03-09T14:43:06.140461+0000 mon.a (mon.0) 752 : audit [DBG] from='client.? 192.168.123.107:0/3048376729' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:43:06.981 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:06 vm07 bash[56315]: audit 2026-03-09T14:43:06.140461+0000 mon.a (mon.0) 752 : audit [DBG] from='client.? 192.168.123.107:0/3048376729' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:43:07.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:06 vm11 bash[43577]: audit 2026-03-09T14:43:05.622852+0000 mgr.y (mgr.44103) 301 : audit [DBG] from='client.34558 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:43:07.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:06 vm11 bash[43577]: audit 2026-03-09T14:43:05.622852+0000 mgr.y (mgr.44103) 301 : audit [DBG] from='client.34558 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:43:07.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:06 vm11 bash[43577]: audit 2026-03-09T14:43:06.140461+0000 mon.a (mon.0) 752 : audit [DBG] from='client.? 192.168.123.107:0/3048376729' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:43:07.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:06 vm11 bash[43577]: audit 2026-03-09T14:43:06.140461+0000 mon.a (mon.0) 752 : audit [DBG] from='client.? 
192.168.123.107:0/3048376729' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:43:08.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:07 vm07 bash[55244]: cluster 2026-03-09T14:43:06.577035+0000 mgr.y (mgr.44103) 302 : cluster [DBG] pgmap v177: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:08.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:07 vm07 bash[55244]: cluster 2026-03-09T14:43:06.577035+0000 mgr.y (mgr.44103) 302 : cluster [DBG] pgmap v177: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:08.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:07 vm07 bash[55244]: audit 2026-03-09T14:43:07.576720+0000 mon.a (mon.0) 753 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:43:08.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:07 vm07 bash[55244]: audit 2026-03-09T14:43:07.576720+0000 mon.a (mon.0) 753 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:43:08.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:07 vm07 bash[56315]: cluster 2026-03-09T14:43:06.577035+0000 mgr.y (mgr.44103) 302 : cluster [DBG] pgmap v177: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:08.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:07 vm07 bash[56315]: cluster 2026-03-09T14:43:06.577035+0000 mgr.y (mgr.44103) 302 : cluster [DBG] pgmap v177: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:08.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:07 vm07 bash[56315]: audit 2026-03-09T14:43:07.576720+0000 mon.a (mon.0) 753 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:43:08.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:07 vm07 bash[56315]: audit 2026-03-09T14:43:07.576720+0000 mon.a (mon.0) 753 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:43:08.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:07 vm11 bash[43577]: cluster 2026-03-09T14:43:06.577035+0000 mgr.y (mgr.44103) 302 : cluster [DBG] pgmap v177: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:08.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:07 vm11 bash[43577]: cluster 2026-03-09T14:43:06.577035+0000 mgr.y (mgr.44103) 302 : cluster [DBG] pgmap v177: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:08.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:07 vm11 bash[43577]: audit 2026-03-09T14:43:07.576720+0000 mon.a (mon.0) 753 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:43:08.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:07 vm11 bash[43577]: audit 2026-03-09T14:43:07.576720+0000 mon.a (mon.0) 753 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: 
dispatch 2026-03-09T14:43:10.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:09 vm07 bash[55244]: cluster 2026-03-09T14:43:08.577404+0000 mgr.y (mgr.44103) 303 : cluster [DBG] pgmap v178: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:10.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:09 vm07 bash[55244]: cluster 2026-03-09T14:43:08.577404+0000 mgr.y (mgr.44103) 303 : cluster [DBG] pgmap v178: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:10.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:09 vm07 bash[56315]: cluster 2026-03-09T14:43:08.577404+0000 mgr.y (mgr.44103) 303 : cluster [DBG] pgmap v178: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:10.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:09 vm07 bash[56315]: cluster 2026-03-09T14:43:08.577404+0000 mgr.y (mgr.44103) 303 : cluster [DBG] pgmap v178: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:10.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:09 vm11 bash[43577]: cluster 2026-03-09T14:43:08.577404+0000 mgr.y (mgr.44103) 303 : cluster [DBG] pgmap v178: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:10.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:09 vm11 bash[43577]: cluster 2026-03-09T14:43:08.577404+0000 mgr.y (mgr.44103) 303 : cluster [DBG] pgmap v178: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:12.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:11 vm07 bash[55244]: cluster 2026-03-09T14:43:10.577745+0000 mgr.y (mgr.44103) 304 : cluster [DBG] pgmap v179: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:12.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:11 vm07 bash[55244]: cluster 2026-03-09T14:43:10.577745+0000 mgr.y (mgr.44103) 304 : cluster [DBG] pgmap v179: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:12.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:11 vm07 bash[56315]: cluster 2026-03-09T14:43:10.577745+0000 mgr.y (mgr.44103) 304 : cluster [DBG] pgmap v179: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:12.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:11 vm07 bash[56315]: cluster 2026-03-09T14:43:10.577745+0000 mgr.y (mgr.44103) 304 : cluster [DBG] pgmap v179: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:12.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:11 vm11 bash[43577]: cluster 2026-03-09T14:43:10.577745+0000 mgr.y (mgr.44103) 304 : cluster [DBG] pgmap v179: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:12.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:11 vm11 bash[43577]: cluster 2026-03-09T14:43:10.577745+0000 mgr.y (mgr.44103) 304 : cluster [DBG] pgmap v179: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:13.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:12 vm07 bash[55244]: 
audit 2026-03-09T14:43:12.177995+0000 mgr.y (mgr.44103) 305 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:43:13.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:12 vm07 bash[55244]: audit 2026-03-09T14:43:12.177995+0000 mgr.y (mgr.44103) 305 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:43:13.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:12 vm07 bash[56315]: audit 2026-03-09T14:43:12.177995+0000 mgr.y (mgr.44103) 305 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:43:13.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:12 vm07 bash[56315]: audit 2026-03-09T14:43:12.177995+0000 mgr.y (mgr.44103) 305 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:43:13.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:12 vm11 bash[43577]: audit 2026-03-09T14:43:12.177995+0000 mgr.y (mgr.44103) 305 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:43:13.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:12 vm11 bash[43577]: audit 2026-03-09T14:43:12.177995+0000 mgr.y (mgr.44103) 305 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:43:13.880 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:43:13 vm07 bash[52213]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:43:13] "GET /metrics HTTP/1.1" 200 38251 "" "Prometheus/2.51.0" 2026-03-09T14:43:14.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:13 vm07 bash[55244]: cluster 2026-03-09T14:43:12.578190+0000 mgr.y (mgr.44103) 306 : cluster [DBG] pgmap v180: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:14.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:13 vm07 bash[55244]: cluster 2026-03-09T14:43:12.578190+0000 mgr.y (mgr.44103) 306 : cluster [DBG] pgmap v180: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:14.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:13 vm07 bash[56315]: cluster 2026-03-09T14:43:12.578190+0000 mgr.y (mgr.44103) 306 : cluster [DBG] pgmap v180: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:14.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:13 vm07 bash[56315]: cluster 2026-03-09T14:43:12.578190+0000 mgr.y (mgr.44103) 306 : cluster [DBG] pgmap v180: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:14.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:13 vm11 bash[43577]: cluster 2026-03-09T14:43:12.578190+0000 mgr.y (mgr.44103) 306 : cluster [DBG] pgmap v180: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:14.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:13 vm11 bash[43577]: cluster 2026-03-09T14:43:12.578190+0000 mgr.y (mgr.44103) 306 : cluster [DBG] pgmap v180: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 
160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:16.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:15 vm07 bash[55244]: cluster 2026-03-09T14:43:14.578614+0000 mgr.y (mgr.44103) 307 : cluster [DBG] pgmap v181: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:16.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:15 vm07 bash[55244]: cluster 2026-03-09T14:43:14.578614+0000 mgr.y (mgr.44103) 307 : cluster [DBG] pgmap v181: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:16.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:15 vm07 bash[56315]: cluster 2026-03-09T14:43:14.578614+0000 mgr.y (mgr.44103) 307 : cluster [DBG] pgmap v181: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:16.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:15 vm07 bash[56315]: cluster 2026-03-09T14:43:14.578614+0000 mgr.y (mgr.44103) 307 : cluster [DBG] pgmap v181: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:16.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:15 vm11 bash[43577]: cluster 2026-03-09T14:43:14.578614+0000 mgr.y (mgr.44103) 307 : cluster [DBG] pgmap v181: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:16.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:15 vm11 bash[43577]: cluster 2026-03-09T14:43:14.578614+0000 mgr.y (mgr.44103) 307 : cluster [DBG] pgmap v181: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:18.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:17 vm11 bash[43577]: cluster 2026-03-09T14:43:16.579108+0000 mgr.y (mgr.44103) 308 : cluster [DBG] pgmap v182: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:18.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:17 vm11 bash[43577]: cluster 2026-03-09T14:43:16.579108+0000 mgr.y (mgr.44103) 308 : cluster [DBG] pgmap v182: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:18.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:17 vm07 bash[55244]: cluster 2026-03-09T14:43:16.579108+0000 mgr.y (mgr.44103) 308 : cluster [DBG] pgmap v182: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:18.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:17 vm07 bash[55244]: cluster 2026-03-09T14:43:16.579108+0000 mgr.y (mgr.44103) 308 : cluster [DBG] pgmap v182: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:18.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:17 vm07 bash[56315]: cluster 2026-03-09T14:43:16.579108+0000 mgr.y (mgr.44103) 308 : cluster [DBG] pgmap v182: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:18.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:17 vm07 bash[56315]: cluster 2026-03-09T14:43:16.579108+0000 mgr.y (mgr.44103) 308 : cluster [DBG] pgmap v182: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:20.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 
14:43:19 vm11 bash[43577]: cluster 2026-03-09T14:43:18.579428+0000 mgr.y (mgr.44103) 309 : cluster [DBG] pgmap v183: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:20.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:19 vm11 bash[43577]: cluster 2026-03-09T14:43:18.579428+0000 mgr.y (mgr.44103) 309 : cluster [DBG] pgmap v183: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:20.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:19 vm07 bash[55244]: cluster 2026-03-09T14:43:18.579428+0000 mgr.y (mgr.44103) 309 : cluster [DBG] pgmap v183: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:20.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:19 vm07 bash[55244]: cluster 2026-03-09T14:43:18.579428+0000 mgr.y (mgr.44103) 309 : cluster [DBG] pgmap v183: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:20.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:19 vm07 bash[56315]: cluster 2026-03-09T14:43:18.579428+0000 mgr.y (mgr.44103) 309 : cluster [DBG] pgmap v183: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:20.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:19 vm07 bash[56315]: cluster 2026-03-09T14:43:18.579428+0000 mgr.y (mgr.44103) 309 : cluster [DBG] pgmap v183: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:22.179 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:21 vm07 bash[55244]: cluster 2026-03-09T14:43:20.579804+0000 mgr.y (mgr.44103) 310 : cluster [DBG] pgmap v184: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:22.179 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:21 vm07 bash[55244]: cluster 2026-03-09T14:43:20.579804+0000 mgr.y (mgr.44103) 310 : cluster [DBG] pgmap v184: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:22.179 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:21 vm07 bash[56315]: cluster 2026-03-09T14:43:20.579804+0000 mgr.y (mgr.44103) 310 : cluster [DBG] pgmap v184: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:22.179 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:21 vm07 bash[56315]: cluster 2026-03-09T14:43:20.579804+0000 mgr.y (mgr.44103) 310 : cluster [DBG] pgmap v184: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:22.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:21 vm11 bash[43577]: cluster 2026-03-09T14:43:20.579804+0000 mgr.y (mgr.44103) 310 : cluster [DBG] pgmap v184: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:22.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:21 vm11 bash[43577]: cluster 2026-03-09T14:43:20.579804+0000 mgr.y (mgr.44103) 310 : cluster [DBG] pgmap v184: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:22.751 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:43:22 vm11 bash[59245]: logger=infra.usagestats t=2026-03-09T14:43:22.312712473Z level=info msg="Usage 
stats are ready to report" 2026-03-09T14:43:23.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:22 vm11 bash[43577]: audit 2026-03-09T14:43:22.188630+0000 mgr.y (mgr.44103) 311 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:43:23.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:22 vm11 bash[43577]: audit 2026-03-09T14:43:22.188630+0000 mgr.y (mgr.44103) 311 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:43:23.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:22 vm11 bash[43577]: audit 2026-03-09T14:43:22.576779+0000 mon.a (mon.0) 754 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:43:23.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:22 vm11 bash[43577]: audit 2026-03-09T14:43:22.576779+0000 mon.a (mon.0) 754 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:43:23.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:22 vm07 bash[55244]: audit 2026-03-09T14:43:22.188630+0000 mgr.y (mgr.44103) 311 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:43:23.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:22 vm07 bash[55244]: audit 2026-03-09T14:43:22.188630+0000 mgr.y (mgr.44103) 311 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:43:23.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:22 vm07 bash[55244]: audit 2026-03-09T14:43:22.576779+0000 mon.a (mon.0) 754 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:43:23.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:22 vm07 bash[55244]: audit 2026-03-09T14:43:22.576779+0000 mon.a (mon.0) 754 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:43:23.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:22 vm07 bash[56315]: audit 2026-03-09T14:43:22.188630+0000 mgr.y (mgr.44103) 311 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:43:23.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:22 vm07 bash[56315]: audit 2026-03-09T14:43:22.188630+0000 mgr.y (mgr.44103) 311 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:43:23.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:22 vm07 bash[56315]: audit 2026-03-09T14:43:22.576779+0000 mon.a (mon.0) 754 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:43:23.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:22 vm07 bash[56315]: audit 2026-03-09T14:43:22.576779+0000 mon.a (mon.0) 754 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 
2026-03-09T14:43:23.904 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:43:23 vm07 bash[52213]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:43:23] "GET /metrics HTTP/1.1" 200 38252 "" "Prometheus/2.51.0" 2026-03-09T14:43:24.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:23 vm11 bash[43577]: cluster 2026-03-09T14:43:22.580162+0000 mgr.y (mgr.44103) 312 : cluster [DBG] pgmap v185: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:24.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:23 vm11 bash[43577]: cluster 2026-03-09T14:43:22.580162+0000 mgr.y (mgr.44103) 312 : cluster [DBG] pgmap v185: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:24.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:23 vm07 bash[55244]: cluster 2026-03-09T14:43:22.580162+0000 mgr.y (mgr.44103) 312 : cluster [DBG] pgmap v185: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:24.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:23 vm07 bash[55244]: cluster 2026-03-09T14:43:22.580162+0000 mgr.y (mgr.44103) 312 : cluster [DBG] pgmap v185: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:24.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:23 vm07 bash[56315]: cluster 2026-03-09T14:43:22.580162+0000 mgr.y (mgr.44103) 312 : cluster [DBG] pgmap v185: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:24.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:23 vm07 bash[56315]: cluster 2026-03-09T14:43:22.580162+0000 mgr.y (mgr.44103) 312 : cluster [DBG] pgmap v185: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:26.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:25 vm11 bash[43577]: cluster 2026-03-09T14:43:24.580616+0000 mgr.y (mgr.44103) 313 : cluster [DBG] pgmap v186: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:26.252 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:25 vm11 bash[43577]: cluster 2026-03-09T14:43:24.580616+0000 mgr.y (mgr.44103) 313 : cluster [DBG] pgmap v186: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:26.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:25 vm07 bash[55244]: cluster 2026-03-09T14:43:24.580616+0000 mgr.y (mgr.44103) 313 : cluster [DBG] pgmap v186: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:26.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:25 vm07 bash[55244]: cluster 2026-03-09T14:43:24.580616+0000 mgr.y (mgr.44103) 313 : cluster [DBG] pgmap v186: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:26.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:25 vm07 bash[56315]: cluster 2026-03-09T14:43:24.580616+0000 mgr.y (mgr.44103) 313 : cluster [DBG] pgmap v186: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:26.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:25 vm07 bash[56315]: cluster 2026-03-09T14:43:24.580616+0000 mgr.y (mgr.44103) 313 : cluster [DBG] pgmap v186: 161 
pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:28.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:27 vm11 bash[43577]: cluster 2026-03-09T14:43:26.581038+0000 mgr.y (mgr.44103) 314 : cluster [DBG] pgmap v187: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:28.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:27 vm11 bash[43577]: cluster 2026-03-09T14:43:26.581038+0000 mgr.y (mgr.44103) 314 : cluster [DBG] pgmap v187: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:28.403 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:27 vm07 bash[55244]: cluster 2026-03-09T14:43:26.581038+0000 mgr.y (mgr.44103) 314 : cluster [DBG] pgmap v187: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:28.403 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:27 vm07 bash[55244]: cluster 2026-03-09T14:43:26.581038+0000 mgr.y (mgr.44103) 314 : cluster [DBG] pgmap v187: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:28.403 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:27 vm07 bash[56315]: cluster 2026-03-09T14:43:26.581038+0000 mgr.y (mgr.44103) 314 : cluster [DBG] pgmap v187: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:28.403 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:27 vm07 bash[56315]: cluster 2026-03-09T14:43:26.581038+0000 mgr.y (mgr.44103) 314 : cluster [DBG] pgmap v187: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:30.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:30 vm07 bash[55244]: cluster 2026-03-09T14:43:28.581315+0000 mgr.y (mgr.44103) 315 : cluster [DBG] pgmap v188: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:30.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:30 vm07 bash[55244]: cluster 2026-03-09T14:43:28.581315+0000 mgr.y (mgr.44103) 315 : cluster [DBG] pgmap v188: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:30.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:30 vm07 bash[56315]: cluster 2026-03-09T14:43:28.581315+0000 mgr.y (mgr.44103) 315 : cluster [DBG] pgmap v188: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:30.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:30 vm07 bash[56315]: cluster 2026-03-09T14:43:28.581315+0000 mgr.y (mgr.44103) 315 : cluster [DBG] pgmap v188: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:30.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:30 vm11 bash[43577]: cluster 2026-03-09T14:43:28.581315+0000 mgr.y (mgr.44103) 315 : cluster [DBG] pgmap v188: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:30.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:30 vm11 bash[43577]: cluster 2026-03-09T14:43:28.581315+0000 mgr.y (mgr.44103) 315 : cluster [DBG] pgmap v188: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-09T14:43:32.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:32 vm07 bash[55244]: cluster 2026-03-09T14:43:30.581684+0000 mgr.y (mgr.44103) 316 : cluster [DBG] pgmap v189: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:32.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:32 vm07 bash[55244]: cluster 2026-03-09T14:43:30.581684+0000 mgr.y (mgr.44103) 316 : cluster [DBG] pgmap v189: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:32.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:32 vm07 bash[56315]: cluster 2026-03-09T14:43:30.581684+0000 mgr.y (mgr.44103) 316 : cluster [DBG] pgmap v189: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:32.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:32 vm07 bash[56315]: cluster 2026-03-09T14:43:30.581684+0000 mgr.y (mgr.44103) 316 : cluster [DBG] pgmap v189: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:32.501 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:32 vm11 bash[43577]: cluster 2026-03-09T14:43:30.581684+0000 mgr.y (mgr.44103) 316 : cluster [DBG] pgmap v189: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:32.502 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:32 vm11 bash[43577]: cluster 2026-03-09T14:43:30.581684+0000 mgr.y (mgr.44103) 316 : cluster [DBG] pgmap v189: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:33.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:33 vm07 bash[55244]: audit 2026-03-09T14:43:32.197219+0000 mgr.y (mgr.44103) 317 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:43:33.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:33 vm07 bash[55244]: audit 2026-03-09T14:43:32.197219+0000 mgr.y (mgr.44103) 317 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:43:33.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:33 vm07 bash[56315]: audit 2026-03-09T14:43:32.197219+0000 mgr.y (mgr.44103) 317 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:43:33.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:33 vm07 bash[56315]: audit 2026-03-09T14:43:32.197219+0000 mgr.y (mgr.44103) 317 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:43:33.501 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:33 vm11 bash[43577]: audit 2026-03-09T14:43:32.197219+0000 mgr.y (mgr.44103) 317 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:43:33.501 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:33 vm11 bash[43577]: audit 2026-03-09T14:43:32.197219+0000 mgr.y (mgr.44103) 317 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:43:33.904 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 
14:43:33 vm07 bash[52213]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:43:33] "GET /metrics HTTP/1.1" 200 38252 "" "Prometheus/2.51.0" 2026-03-09T14:43:34.403 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:34 vm07 bash[55244]: cluster 2026-03-09T14:43:32.582093+0000 mgr.y (mgr.44103) 318 : cluster [DBG] pgmap v190: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:34.403 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:34 vm07 bash[55244]: cluster 2026-03-09T14:43:32.582093+0000 mgr.y (mgr.44103) 318 : cluster [DBG] pgmap v190: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:34.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:34 vm07 bash[56315]: cluster 2026-03-09T14:43:32.582093+0000 mgr.y (mgr.44103) 318 : cluster [DBG] pgmap v190: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:34.404 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:34 vm07 bash[56315]: cluster 2026-03-09T14:43:32.582093+0000 mgr.y (mgr.44103) 318 : cluster [DBG] pgmap v190: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:34.501 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:34 vm11 bash[43577]: cluster 2026-03-09T14:43:32.582093+0000 mgr.y (mgr.44103) 318 : cluster [DBG] pgmap v190: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:34.501 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:34 vm11 bash[43577]: cluster 2026-03-09T14:43:32.582093+0000 mgr.y (mgr.44103) 318 : cluster [DBG] pgmap v190: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:36.501 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:36 vm11 bash[43577]: cluster 2026-03-09T14:43:34.582610+0000 mgr.y (mgr.44103) 319 : cluster [DBG] pgmap v191: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:36.501 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:36 vm11 bash[43577]: cluster 2026-03-09T14:43:34.582610+0000 mgr.y (mgr.44103) 319 : cluster [DBG] pgmap v191: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:36.653 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:36 vm07 bash[55244]: cluster 2026-03-09T14:43:34.582610+0000 mgr.y (mgr.44103) 319 : cluster [DBG] pgmap v191: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:36.653 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:36 vm07 bash[55244]: cluster 2026-03-09T14:43:34.582610+0000 mgr.y (mgr.44103) 319 : cluster [DBG] pgmap v191: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:36.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:36 vm07 bash[56315]: cluster 2026-03-09T14:43:34.582610+0000 mgr.y (mgr.44103) 319 : cluster [DBG] pgmap v191: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:36.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:36 vm07 bash[56315]: cluster 2026-03-09T14:43:34.582610+0000 mgr.y (mgr.44103) 319 : cluster [DBG] pgmap v191: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB 
avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:38.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:38 vm07 bash[55244]: cluster 2026-03-09T14:43:36.582973+0000 mgr.y (mgr.44103) 320 : cluster [DBG] pgmap v192: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:38.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:38 vm07 bash[55244]: cluster 2026-03-09T14:43:36.582973+0000 mgr.y (mgr.44103) 320 : cluster [DBG] pgmap v192: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:38.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:38 vm07 bash[55244]: audit 2026-03-09T14:43:37.577068+0000 mon.a (mon.0) 755 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:43:38.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:38 vm07 bash[55244]: audit 2026-03-09T14:43:37.577068+0000 mon.a (mon.0) 755 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:43:38.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:38 vm07 bash[56315]: cluster 2026-03-09T14:43:36.582973+0000 mgr.y (mgr.44103) 320 : cluster [DBG] pgmap v192: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:38.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:38 vm07 bash[56315]: cluster 2026-03-09T14:43:36.582973+0000 mgr.y (mgr.44103) 320 : cluster [DBG] pgmap v192: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:38.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:38 vm07 bash[56315]: audit 2026-03-09T14:43:37.577068+0000 mon.a (mon.0) 755 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:43:38.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:38 vm07 bash[56315]: audit 2026-03-09T14:43:37.577068+0000 mon.a (mon.0) 755 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:43:38.751 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:38 vm11 bash[43577]: cluster 2026-03-09T14:43:36.582973+0000 mgr.y (mgr.44103) 320 : cluster [DBG] pgmap v192: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:38.751 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:38 vm11 bash[43577]: cluster 2026-03-09T14:43:36.582973+0000 mgr.y (mgr.44103) 320 : cluster [DBG] pgmap v192: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:38.751 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:38 vm11 bash[43577]: audit 2026-03-09T14:43:37.577068+0000 mon.a (mon.0) 755 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:43:38.751 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:38 vm11 bash[43577]: audit 2026-03-09T14:43:37.577068+0000 mon.a (mon.0) 755 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:43:40.654 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:40 vm07 bash[55244]: cluster 2026-03-09T14:43:38.583368+0000 mgr.y (mgr.44103) 321 : cluster [DBG] pgmap v193: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:40.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:40 vm07 bash[55244]: cluster 2026-03-09T14:43:38.583368+0000 mgr.y (mgr.44103) 321 : cluster [DBG] pgmap v193: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:40.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:40 vm07 bash[56315]: cluster 2026-03-09T14:43:38.583368+0000 mgr.y (mgr.44103) 321 : cluster [DBG] pgmap v193: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:40.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:40 vm07 bash[56315]: cluster 2026-03-09T14:43:38.583368+0000 mgr.y (mgr.44103) 321 : cluster [DBG] pgmap v193: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:40.751 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:40 vm11 bash[43577]: cluster 2026-03-09T14:43:38.583368+0000 mgr.y (mgr.44103) 321 : cluster [DBG] pgmap v193: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:40.751 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:40 vm11 bash[43577]: cluster 2026-03-09T14:43:38.583368+0000 mgr.y (mgr.44103) 321 : cluster [DBG] pgmap v193: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:42.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:42 vm07 bash[55244]: cluster 2026-03-09T14:43:40.583700+0000 mgr.y (mgr.44103) 322 : cluster [DBG] pgmap v194: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:42.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:42 vm07 bash[55244]: cluster 2026-03-09T14:43:40.583700+0000 mgr.y (mgr.44103) 322 : cluster [DBG] pgmap v194: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:42.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:42 vm07 bash[56315]: cluster 2026-03-09T14:43:40.583700+0000 mgr.y (mgr.44103) 322 : cluster [DBG] pgmap v194: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:42.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:42 vm07 bash[56315]: cluster 2026-03-09T14:43:40.583700+0000 mgr.y (mgr.44103) 322 : cluster [DBG] pgmap v194: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:42.751 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:42 vm11 bash[43577]: cluster 2026-03-09T14:43:40.583700+0000 mgr.y (mgr.44103) 322 : cluster [DBG] pgmap v194: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:42.751 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:42 vm11 bash[43577]: cluster 2026-03-09T14:43:40.583700+0000 mgr.y (mgr.44103) 322 : cluster [DBG] pgmap v194: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:43.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:43 vm07 bash[55244]: audit 2026-03-09T14:43:42.205960+0000 
mgr.y (mgr.44103) 323 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:43:43.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:43 vm07 bash[55244]: audit 2026-03-09T14:43:42.205960+0000 mgr.y (mgr.44103) 323 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:43:43.654 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:43:43 vm07 bash[52213]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:43:43] "GET /metrics HTTP/1.1" 200 38250 "" "Prometheus/2.51.0" 2026-03-09T14:43:43.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:43 vm07 bash[56315]: audit 2026-03-09T14:43:42.205960+0000 mgr.y (mgr.44103) 323 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:43:43.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:43 vm07 bash[56315]: audit 2026-03-09T14:43:42.205960+0000 mgr.y (mgr.44103) 323 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:43:43.751 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:43 vm11 bash[43577]: audit 2026-03-09T14:43:42.205960+0000 mgr.y (mgr.44103) 323 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:43:43.751 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:43 vm11 bash[43577]: audit 2026-03-09T14:43:42.205960+0000 mgr.y (mgr.44103) 323 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:43:44.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:44 vm07 bash[55244]: cluster 2026-03-09T14:43:42.584174+0000 mgr.y (mgr.44103) 324 : cluster [DBG] pgmap v195: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:44.654 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:44 vm07 bash[55244]: cluster 2026-03-09T14:43:42.584174+0000 mgr.y (mgr.44103) 324 : cluster [DBG] pgmap v195: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:44.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:44 vm07 bash[56315]: cluster 2026-03-09T14:43:42.584174+0000 mgr.y (mgr.44103) 324 : cluster [DBG] pgmap v195: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:44.654 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:44 vm07 bash[56315]: cluster 2026-03-09T14:43:42.584174+0000 mgr.y (mgr.44103) 324 : cluster [DBG] pgmap v195: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:44.751 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:44 vm11 bash[43577]: cluster 2026-03-09T14:43:42.584174+0000 mgr.y (mgr.44103) 324 : cluster [DBG] pgmap v195: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:44.751 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:44 vm11 bash[43577]: cluster 2026-03-09T14:43:42.584174+0000 mgr.y (mgr.44103) 324 : cluster [DBG] pgmap v195: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T14:43:46.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:46 vm07 bash[55244]: cluster 2026-03-09T14:43:44.584577+0000 mgr.y (mgr.44103) 325 : cluster [DBG] pgmap v196: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:46.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:46 vm07 bash[55244]: cluster 2026-03-09T14:43:44.584577+0000 mgr.y (mgr.44103) 325 : cluster [DBG] pgmap v196: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:46.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:46 vm07 bash[55244]: audit 2026-03-09T14:43:45.411543+0000 mon.a (mon.0) 756 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:43:46.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:46 vm07 bash[55244]: audit 2026-03-09T14:43:45.411543+0000 mon.a (mon.0) 756 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:43:46.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:46 vm07 bash[55244]: audit 2026-03-09T14:43:45.412054+0000 mon.a (mon.0) 757 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:43:46.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:46 vm07 bash[55244]: audit 2026-03-09T14:43:45.412054+0000 mon.a (mon.0) 757 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:43:46.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:46 vm07 bash[55244]: audit 2026-03-09T14:43:45.416620+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:43:46.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:46 vm07 bash[55244]: audit 2026-03-09T14:43:45.416620+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:43:46.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:46 vm07 bash[56315]: cluster 2026-03-09T14:43:44.584577+0000 mgr.y (mgr.44103) 325 : cluster [DBG] pgmap v196: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:46.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:46 vm07 bash[56315]: cluster 2026-03-09T14:43:44.584577+0000 mgr.y (mgr.44103) 325 : cluster [DBG] pgmap v196: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:46.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:46 vm07 bash[56315]: audit 2026-03-09T14:43:45.411543+0000 mon.a (mon.0) 756 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:43:46.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:46 vm07 bash[56315]: audit 2026-03-09T14:43:45.411543+0000 mon.a (mon.0) 756 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:43:46.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:46 vm07 bash[56315]: audit 2026-03-09T14:43:45.412054+0000 mon.a (mon.0) 757 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' 
entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:43:46.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:46 vm07 bash[56315]: audit 2026-03-09T14:43:45.412054+0000 mon.a (mon.0) 757 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:43:46.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:46 vm07 bash[56315]: audit 2026-03-09T14:43:45.416620+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:43:46.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:46 vm07 bash[56315]: audit 2026-03-09T14:43:45.416620+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:43:47.001 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:46 vm11 bash[43577]: cluster 2026-03-09T14:43:44.584577+0000 mgr.y (mgr.44103) 325 : cluster [DBG] pgmap v196: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:47.001 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:46 vm11 bash[43577]: cluster 2026-03-09T14:43:44.584577+0000 mgr.y (mgr.44103) 325 : cluster [DBG] pgmap v196: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:47.001 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:46 vm11 bash[43577]: audit 2026-03-09T14:43:45.411543+0000 mon.a (mon.0) 756 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:43:47.001 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:46 vm11 bash[43577]: audit 2026-03-09T14:43:45.411543+0000 mon.a (mon.0) 756 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:43:47.001 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:46 vm11 bash[43577]: audit 2026-03-09T14:43:45.412054+0000 mon.a (mon.0) 757 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:43:47.001 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:46 vm11 bash[43577]: audit 2026-03-09T14:43:45.412054+0000 mon.a (mon.0) 757 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:43:47.002 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:46 vm11 bash[43577]: audit 2026-03-09T14:43:45.416620+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:43:47.002 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:46 vm11 bash[43577]: audit 2026-03-09T14:43:45.416620+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' 2026-03-09T14:43:47.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:47 vm07 bash[55244]: cluster 2026-03-09T14:43:46.585001+0000 mgr.y (mgr.44103) 326 : cluster [DBG] pgmap v197: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:47.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:47 vm07 bash[55244]: cluster 2026-03-09T14:43:46.585001+0000 mgr.y (mgr.44103) 326 : cluster [DBG] pgmap v197: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 
GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:47.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:47 vm07 bash[56315]: cluster 2026-03-09T14:43:46.585001+0000 mgr.y (mgr.44103) 326 : cluster [DBG] pgmap v197: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:47.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:47 vm07 bash[56315]: cluster 2026-03-09T14:43:46.585001+0000 mgr.y (mgr.44103) 326 : cluster [DBG] pgmap v197: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:48.001 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:47 vm11 bash[43577]: cluster 2026-03-09T14:43:46.585001+0000 mgr.y (mgr.44103) 326 : cluster [DBG] pgmap v197: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:48.001 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:47 vm11 bash[43577]: cluster 2026-03-09T14:43:46.585001+0000 mgr.y (mgr.44103) 326 : cluster [DBG] pgmap v197: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:50.001 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:49 vm11 bash[43577]: cluster 2026-03-09T14:43:48.585307+0000 mgr.y (mgr.44103) 327 : cluster [DBG] pgmap v198: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:50.001 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:49 vm11 bash[43577]: cluster 2026-03-09T14:43:48.585307+0000 mgr.y (mgr.44103) 327 : cluster [DBG] pgmap v198: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:50.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:49 vm07 bash[55244]: cluster 2026-03-09T14:43:48.585307+0000 mgr.y (mgr.44103) 327 : cluster [DBG] pgmap v198: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:50.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:49 vm07 bash[55244]: cluster 2026-03-09T14:43:48.585307+0000 mgr.y (mgr.44103) 327 : cluster [DBG] pgmap v198: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:50.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:49 vm07 bash[56315]: cluster 2026-03-09T14:43:48.585307+0000 mgr.y (mgr.44103) 327 : cluster [DBG] pgmap v198: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:50.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:49 vm07 bash[56315]: cluster 2026-03-09T14:43:48.585307+0000 mgr.y (mgr.44103) 327 : cluster [DBG] pgmap v198: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:52.001 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:51 vm11 bash[43577]: cluster 2026-03-09T14:43:50.585633+0000 mgr.y (mgr.44103) 328 : cluster [DBG] pgmap v199: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:52.001 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:51 vm11 bash[43577]: cluster 2026-03-09T14:43:50.585633+0000 mgr.y (mgr.44103) 328 : cluster [DBG] pgmap v199: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:52.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:51 
vm07 bash[55244]: cluster 2026-03-09T14:43:50.585633+0000 mgr.y (mgr.44103) 328 : cluster [DBG] pgmap v199: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:52.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:51 vm07 bash[55244]: cluster 2026-03-09T14:43:50.585633+0000 mgr.y (mgr.44103) 328 : cluster [DBG] pgmap v199: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:52.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:51 vm07 bash[56315]: cluster 2026-03-09T14:43:50.585633+0000 mgr.y (mgr.44103) 328 : cluster [DBG] pgmap v199: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:52.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:51 vm07 bash[56315]: cluster 2026-03-09T14:43:50.585633+0000 mgr.y (mgr.44103) 328 : cluster [DBG] pgmap v199: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:53.001 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:52 vm11 bash[43577]: audit 2026-03-09T14:43:52.211970+0000 mgr.y (mgr.44103) 329 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:43:53.001 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:52 vm11 bash[43577]: audit 2026-03-09T14:43:52.211970+0000 mgr.y (mgr.44103) 329 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:43:53.001 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:52 vm11 bash[43577]: audit 2026-03-09T14:43:52.577114+0000 mon.a (mon.0) 759 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:43:53.001 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:52 vm11 bash[43577]: audit 2026-03-09T14:43:52.577114+0000 mon.a (mon.0) 759 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:43:53.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:52 vm07 bash[55244]: audit 2026-03-09T14:43:52.211970+0000 mgr.y (mgr.44103) 329 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:43:53.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:52 vm07 bash[55244]: audit 2026-03-09T14:43:52.211970+0000 mgr.y (mgr.44103) 329 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:43:53.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:52 vm07 bash[55244]: audit 2026-03-09T14:43:52.577114+0000 mon.a (mon.0) 759 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:43:53.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:52 vm07 bash[55244]: audit 2026-03-09T14:43:52.577114+0000 mon.a (mon.0) 759 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:43:53.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:52 vm07 bash[56315]: audit 2026-03-09T14:43:52.211970+0000 mgr.y 
(mgr.44103) 329 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:43:53.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:52 vm07 bash[56315]: audit 2026-03-09T14:43:52.211970+0000 mgr.y (mgr.44103) 329 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:43:53.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:52 vm07 bash[56315]: audit 2026-03-09T14:43:52.577114+0000 mon.a (mon.0) 759 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:43:53.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:52 vm07 bash[56315]: audit 2026-03-09T14:43:52.577114+0000 mon.a (mon.0) 759 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:43:53.730 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:43:53 vm07 bash[52213]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:43:53] "GET /metrics HTTP/1.1" 200 38250 "" "Prometheus/2.51.0" 2026-03-09T14:43:54.001 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:53 vm11 bash[43577]: cluster 2026-03-09T14:43:52.585970+0000 mgr.y (mgr.44103) 330 : cluster [DBG] pgmap v200: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:54.001 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:53 vm11 bash[43577]: cluster 2026-03-09T14:43:52.585970+0000 mgr.y (mgr.44103) 330 : cluster [DBG] pgmap v200: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:54.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:53 vm07 bash[55244]: cluster 2026-03-09T14:43:52.585970+0000 mgr.y (mgr.44103) 330 : cluster [DBG] pgmap v200: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:54.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:53 vm07 bash[55244]: cluster 2026-03-09T14:43:52.585970+0000 mgr.y (mgr.44103) 330 : cluster [DBG] pgmap v200: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:54.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:53 vm07 bash[56315]: cluster 2026-03-09T14:43:52.585970+0000 mgr.y (mgr.44103) 330 : cluster [DBG] pgmap v200: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:54.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:53 vm07 bash[56315]: cluster 2026-03-09T14:43:52.585970+0000 mgr.y (mgr.44103) 330 : cluster [DBG] pgmap v200: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:56.001 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:55 vm11 bash[43577]: cluster 2026-03-09T14:43:54.586343+0000 mgr.y (mgr.44103) 331 : cluster [DBG] pgmap v201: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:56.001 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:55 vm11 bash[43577]: cluster 2026-03-09T14:43:54.586343+0000 mgr.y (mgr.44103) 331 : cluster [DBG] pgmap v201: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:56.154 
INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:55 vm07 bash[55244]: cluster 2026-03-09T14:43:54.586343+0000 mgr.y (mgr.44103) 331 : cluster [DBG] pgmap v201: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:56.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:55 vm07 bash[55244]: cluster 2026-03-09T14:43:54.586343+0000 mgr.y (mgr.44103) 331 : cluster [DBG] pgmap v201: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:56.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:55 vm07 bash[56315]: cluster 2026-03-09T14:43:54.586343+0000 mgr.y (mgr.44103) 331 : cluster [DBG] pgmap v201: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:56.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:55 vm07 bash[56315]: cluster 2026-03-09T14:43:54.586343+0000 mgr.y (mgr.44103) 331 : cluster [DBG] pgmap v201: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:43:58.001 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:57 vm11 bash[43577]: cluster 2026-03-09T14:43:56.586886+0000 mgr.y (mgr.44103) 332 : cluster [DBG] pgmap v202: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:58.001 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:57 vm11 bash[43577]: cluster 2026-03-09T14:43:56.586886+0000 mgr.y (mgr.44103) 332 : cluster [DBG] pgmap v202: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:58.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:57 vm07 bash[55244]: cluster 2026-03-09T14:43:56.586886+0000 mgr.y (mgr.44103) 332 : cluster [DBG] pgmap v202: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:58.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:57 vm07 bash[55244]: cluster 2026-03-09T14:43:56.586886+0000 mgr.y (mgr.44103) 332 : cluster [DBG] pgmap v202: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:58.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:57 vm07 bash[56315]: cluster 2026-03-09T14:43:56.586886+0000 mgr.y (mgr.44103) 332 : cluster [DBG] pgmap v202: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:43:58.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:57 vm07 bash[56315]: cluster 2026-03-09T14:43:56.586886+0000 mgr.y (mgr.44103) 332 : cluster [DBG] pgmap v202: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:44:00.001 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:59 vm11 bash[43577]: cluster 2026-03-09T14:43:58.587220+0000 mgr.y (mgr.44103) 333 : cluster [DBG] pgmap v203: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:44:00.001 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:43:59 vm11 bash[43577]: cluster 2026-03-09T14:43:58.587220+0000 mgr.y (mgr.44103) 333 : cluster [DBG] pgmap v203: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:44:00.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:59 vm07 bash[55244]: cluster 
2026-03-09T14:43:58.587220+0000 mgr.y (mgr.44103) 333 : cluster [DBG] pgmap v203: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:44:00.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:43:59 vm07 bash[55244]: cluster 2026-03-09T14:43:58.587220+0000 mgr.y (mgr.44103) 333 : cluster [DBG] pgmap v203: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:44:00.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:59 vm07 bash[56315]: cluster 2026-03-09T14:43:58.587220+0000 mgr.y (mgr.44103) 333 : cluster [DBG] pgmap v203: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:44:00.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:43:59 vm07 bash[56315]: cluster 2026-03-09T14:43:58.587220+0000 mgr.y (mgr.44103) 333 : cluster [DBG] pgmap v203: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:44:02.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:01 vm07 bash[55244]: cluster 2026-03-09T14:44:00.587602+0000 mgr.y (mgr.44103) 334 : cluster [DBG] pgmap v204: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:44:02.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:01 vm07 bash[55244]: cluster 2026-03-09T14:44:00.587602+0000 mgr.y (mgr.44103) 334 : cluster [DBG] pgmap v204: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:44:02.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:01 vm07 bash[56315]: cluster 2026-03-09T14:44:00.587602+0000 mgr.y (mgr.44103) 334 : cluster [DBG] pgmap v204: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:44:02.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:01 vm07 bash[56315]: cluster 2026-03-09T14:44:00.587602+0000 mgr.y (mgr.44103) 334 : cluster [DBG] pgmap v204: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:44:02.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:01 vm11 bash[43577]: cluster 2026-03-09T14:44:00.587602+0000 mgr.y (mgr.44103) 334 : cluster [DBG] pgmap v204: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:44:02.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:01 vm11 bash[43577]: cluster 2026-03-09T14:44:00.587602+0000 mgr.y (mgr.44103) 334 : cluster [DBG] pgmap v204: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:44:03.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:02 vm07 bash[55244]: audit 2026-03-09T14:44:02.222003+0000 mgr.y (mgr.44103) 335 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:44:03.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:02 vm07 bash[55244]: audit 2026-03-09T14:44:02.222003+0000 mgr.y (mgr.44103) 335 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:44:03.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:02 vm07 bash[56315]: audit 2026-03-09T14:44:02.222003+0000 mgr.y (mgr.44103) 335 : audit [DBG] from='client.34456 -' 
entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:44:03.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:02 vm07 bash[56315]: audit 2026-03-09T14:44:02.222003+0000 mgr.y (mgr.44103) 335 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:44:03.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:02 vm11 bash[43577]: audit 2026-03-09T14:44:02.222003+0000 mgr.y (mgr.44103) 335 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:44:03.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:02 vm11 bash[43577]: audit 2026-03-09T14:44:02.222003+0000 mgr.y (mgr.44103) 335 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:44:03.767 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:44:03 vm07 bash[52213]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:44:03] "GET /metrics HTTP/1.1" 200 38250 "" "Prometheus/2.51.0" 2026-03-09T14:44:04.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:03 vm07 bash[55244]: cluster 2026-03-09T14:44:02.588036+0000 mgr.y (mgr.44103) 336 : cluster [DBG] pgmap v205: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:44:04.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:03 vm07 bash[55244]: cluster 2026-03-09T14:44:02.588036+0000 mgr.y (mgr.44103) 336 : cluster [DBG] pgmap v205: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:44:04.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:03 vm07 bash[56315]: cluster 2026-03-09T14:44:02.588036+0000 mgr.y (mgr.44103) 336 : cluster [DBG] pgmap v205: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:44:04.154 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:03 vm07 bash[56315]: cluster 2026-03-09T14:44:02.588036+0000 mgr.y (mgr.44103) 336 : cluster [DBG] pgmap v205: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:44:04.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:03 vm11 bash[43577]: cluster 2026-03-09T14:44:02.588036+0000 mgr.y (mgr.44103) 336 : cluster [DBG] pgmap v205: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:44:04.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:03 vm11 bash[43577]: cluster 2026-03-09T14:44:02.588036+0000 mgr.y (mgr.44103) 336 : cluster [DBG] pgmap v205: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:44:06.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:05 vm07 bash[55244]: cluster 2026-03-09T14:44:04.588463+0000 mgr.y (mgr.44103) 337 : cluster [DBG] pgmap v206: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:44:06.154 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:05 vm07 bash[55244]: cluster 2026-03-09T14:44:04.588463+0000 mgr.y (mgr.44103) 337 : cluster [DBG] pgmap v206: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:44:06.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 
14:44:05 vm07 bash[56315]: cluster 2026-03-09T14:44:04.588463+0000 mgr.y (mgr.44103) 337 : cluster [DBG] pgmap v206: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:44:06.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:05 vm07 bash[56315]: cluster 2026-03-09T14:44:04.588463+0000 mgr.y (mgr.44103) 337 : cluster [DBG] pgmap v206: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:44:06.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:05 vm11 bash[43577]: cluster 2026-03-09T14:44:04.588463+0000 mgr.y (mgr.44103) 337 : cluster [DBG] pgmap v206: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:44:06.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:05 vm11 bash[43577]: cluster 2026-03-09T14:44:04.588463+0000 mgr.y (mgr.44103) 337 : cluster [DBG] pgmap v206: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:44:07.048 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps'
2026-03-09T14:44:07.475 INFO:teuthology.orchestra.run.vm07.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-09T14:44:07.475 INFO:teuthology.orchestra.run.vm07.stdout:alertmanager.a vm07 *:9093,9094 running (6m) 88s ago 11m 13.7M - 0.25.0 c8568f914cd2 7b5214f8e385
2026-03-09T14:44:07.475 INFO:teuthology.orchestra.run.vm07.stdout:grafana.a vm11 *:3000 running (94s) 88s ago 11m 53.3M - 10.4.0 c8b91775d855 63d856f6fd6e
2026-03-09T14:44:07.475 INFO:teuthology.orchestra.run.vm07.stdout:iscsi.foo.vm07.ohlmos vm07 running (115s) 88s ago 10m 49.0M - 3.9 654f31e6858e fe7cab5d4b5d
2026-03-09T14:44:07.475 INFO:teuthology.orchestra.run.vm07.stdout:mgr.x vm11 *:8443,9283,8765 running (5m) 88s ago 14m 466M - 19.2.3-678-ge911bdeb 654f31e6858e d35dddd392d1
2026-03-09T14:44:07.475 INFO:teuthology.orchestra.run.vm07.stdout:mgr.y vm07 *:8443,9283,8765 running (6m) 88s ago 14m 545M - 19.2.3-678-ge911bdeb 654f31e6858e bdbac6dff330
2026-03-09T14:44:07.475 INFO:teuthology.orchestra.run.vm07.stdout:mon.a vm07 running (5m) 88s ago 14m 54.8M 2048M 19.2.3-678-ge911bdeb 654f31e6858e bcdaa5dfc948
2026-03-09T14:44:07.475 INFO:teuthology.orchestra.run.vm07.stdout:mon.b vm11 running (5m) 88s ago 14m 47.7M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 1caba9bf8a13
2026-03-09T14:44:07.475 INFO:teuthology.orchestra.run.vm07.stdout:mon.c vm07 running (5m) 88s ago 14m 53.5M 2048M 19.2.3-678-ge911bdeb 654f31e6858e ff7dfe3a6c7c
2026-03-09T14:44:07.475 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.a vm07 *:9100 running (6m) 88s ago 11m 7667k - 1.7.0 72c9c2088986 16d64a9c3aa7
2026-03-09T14:44:07.475 INFO:teuthology.orchestra.run.vm07.stdout:node-exporter.b vm11 *:9100 running (6m) 88s ago 11m 7543k - 1.7.0 72c9c2088986 8e368c535897
2026-03-09T14:44:07.475 INFO:teuthology.orchestra.run.vm07.stdout:osd.0 vm07 running (4m) 88s ago 13m 53.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 24632814894d
2026-03-09T14:44:07.475 INFO:teuthology.orchestra.run.vm07.stdout:osd.1 vm07 running (3m) 88s ago 13m 75.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 1f773b5d0f68
2026-03-09T14:44:07.475 INFO:teuthology.orchestra.run.vm07.stdout:osd.2 vm07 running (4m) 88s ago 13m 70.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 7d943c2f091c
2026-03-09T14:44:07.475 INFO:teuthology.orchestra.run.vm07.stdout:osd.3 vm07 running (4m) 88s ago 13m 56.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 7c234b83449a
2026-03-09T14:44:07.475 INFO:teuthology.orchestra.run.vm07.stdout:osd.4 vm11 running (3m) 88s ago 12m 54.6M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 811379ab4ba5
2026-03-09T14:44:07.475 INFO:teuthology.orchestra.run.vm07.stdout:osd.5 vm11 running (3m) 88s ago 12m 71.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e bc7e71aa5718
2026-03-09T14:44:07.476 INFO:teuthology.orchestra.run.vm07.stdout:osd.6 vm11 running (2m) 88s ago 12m 48.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 20bc2716b966
2026-03-09T14:44:07.476 INFO:teuthology.orchestra.run.vm07.stdout:osd.7 vm11 running (2m) 88s ago 12m 71.6M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 2557f7ad255a
2026-03-09T14:44:07.476 INFO:teuthology.orchestra.run.vm07.stdout:prometheus.a vm11 *:9095 running (5m) 88s ago 11m 40.7M - 2.51.0 1d3b7f56885b e88f0339687c
2026-03-09T14:44:07.476 INFO:teuthology.orchestra.run.vm07.stdout:rgw.foo.vm07.urmgxb vm07 *:8000 running (2m) 88s ago 10m 91.6M - 19.2.3-678-ge911bdeb 654f31e6858e df702c44464d
2026-03-09T14:44:07.476 INFO:teuthology.orchestra.run.vm07.stdout:rgw.foo.vm11.ncyump vm11 *:8000 running (2m) 88s ago 10m 91.5M - 19.2.3-678-ge911bdeb 654f31e6858e 75ca9d41b995
2026-03-09T14:44:07.476 INFO:teuthology.orchestra.run.vm07.stdout:rgw.smpl.vm07.tkkeli vm07 *:80 running (2m) 88s ago 10m 91.6M - 19.2.3-678-ge911bdeb 654f31e6858e 9a13050e9ad3
2026-03-09T14:44:07.476 INFO:teuthology.orchestra.run.vm07.stdout:rgw.smpl.vm11.ocxkef vm11 *:80 running (2m) 88s ago 10m 93.5M - 19.2.3-678-ge911bdeb 654f31e6858e 3dd8df0c45b8
2026-03-09T14:44:07.524 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions'
2026-03-09T14:44:07.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:07 vm07 bash[55244]: cluster 2026-03-09T14:44:06.588885+0000 mgr.y (mgr.44103) 338 : cluster [DBG] pgmap v207: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T14:44:07.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:07 vm07 bash[55244]: cluster 2026-03-09T14:44:06.588885+0000 mgr.y (mgr.44103) 338 : cluster [DBG] pgmap v207: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T14:44:07.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:07 vm07 bash[55244]: audit 2026-03-09T14:44:07.478161+0000 mgr.y (mgr.44103) 339 : audit [DBG] from='client.34570 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:44:07.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:07 vm07 bash[55244]: audit 2026-03-09T14:44:07.478161+0000 mgr.y (mgr.44103) 339 : audit [DBG] from='client.34570 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:44:07.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:07 vm07 bash[55244]: audit 2026-03-09T14:44:07.577449+0000 mon.a (mon.0) 760 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T14:44:07.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:07 vm07 bash[55244]: audit 2026-03-09T14:44:07.577449+0000 mon.a (mon.0) 760 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T14:44:07.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:07 vm07 bash[56315]: cluster 2026-03-09T14:44:06.588885+0000 mgr.y (mgr.44103) 338 : cluster [DBG] pgmap v207: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T14:44:07.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:07 vm07 bash[56315]: cluster 2026-03-09T14:44:06.588885+0000 mgr.y (mgr.44103) 338 : cluster [DBG] pgmap v207: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T14:44:07.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:07 vm07 bash[56315]: audit 2026-03-09T14:44:07.478161+0000 mgr.y (mgr.44103) 339 : audit [DBG] from='client.34570 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:44:07.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:07 vm07 bash[56315]: audit 2026-03-09T14:44:07.478161+0000 mgr.y (mgr.44103) 339 : audit [DBG] from='client.34570 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:44:07.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:07 vm07 bash[56315]: audit 2026-03-09T14:44:07.577449+0000 mon.a (mon.0) 760 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T14:44:07.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:07 vm07 bash[56315]: audit 2026-03-09T14:44:07.577449+0000 mon.a (mon.0) 760 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T14:44:07.984 INFO:teuthology.orchestra.run.vm07.stdout:{
2026-03-09T14:44:07.984 INFO:teuthology.orchestra.run.vm07.stdout: "mon": {
2026-03-09T14:44:07.984 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3
2026-03-09T14:44:07.984 INFO:teuthology.orchestra.run.vm07.stdout: },
2026-03-09T14:44:07.984 INFO:teuthology.orchestra.run.vm07.stdout: "mgr": {
2026-03-09T14:44:07.984 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-09T14:44:07.984 INFO:teuthology.orchestra.run.vm07.stdout: },
2026-03-09T14:44:07.984 INFO:teuthology.orchestra.run.vm07.stdout: "osd": {
2026-03-09T14:44:07.984 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 8
2026-03-09T14:44:07.984 INFO:teuthology.orchestra.run.vm07.stdout: },
2026-03-09T14:44:07.984 INFO:teuthology.orchestra.run.vm07.stdout: "rgw": {
2026-03-09T14:44:07.984 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 4
2026-03-09T14:44:07.984 INFO:teuthology.orchestra.run.vm07.stdout: },
2026-03-09T14:44:07.984 INFO:teuthology.orchestra.run.vm07.stdout: "overall": {
2026-03-09T14:44:07.984 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 17
2026-03-09T14:44:07.984 INFO:teuthology.orchestra.run.vm07.stdout: }
2026-03-09T14:44:07.984 INFO:teuthology.orchestra.run.vm07.stdout:}
2026-03-09T14:44:08.038 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade status'
2026-03-09T14:44:08.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:07 vm11 bash[43577]: cluster 2026-03-09T14:44:06.588885+0000 mgr.y (mgr.44103) 338 : cluster [DBG] pgmap v207: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T14:44:08.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:07 vm11 bash[43577]: cluster 2026-03-09T14:44:06.588885+0000 mgr.y (mgr.44103) 338 : cluster [DBG] pgmap v207: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-09T14:44:08.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:07 vm11 bash[43577]: audit 2026-03-09T14:44:07.478161+0000 mgr.y (mgr.44103) 339 : audit [DBG] from='client.34570 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:44:08.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:07 vm11 bash[43577]: audit 2026-03-09T14:44:07.478161+0000 mgr.y (mgr.44103) 339 : audit [DBG] from='client.34570 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T14:44:08.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:07 vm11 bash[43577]: audit 2026-03-09T14:44:07.577449+0000 mon.a (mon.0) 760 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T14:44:08.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:07 vm11 bash[43577]: audit 2026-03-09T14:44:07.577449+0000 mon.a (mon.0) 760 : audit [DBG] from='mgr.44103 192.168.123.107:0/807608323' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T14:44:08.472 INFO:teuthology.orchestra.run.vm07.stdout:{
2026-03-09T14:44:08.472 INFO:teuthology.orchestra.run.vm07.stdout: "target_image": null,
2026-03-09T14:44:08.472 INFO:teuthology.orchestra.run.vm07.stdout: "in_progress": false,
2026-03-09T14:44:08.472 INFO:teuthology.orchestra.run.vm07.stdout: "which": "",
2026-03-09T14:44:08.472 INFO:teuthology.orchestra.run.vm07.stdout: "services_complete": [],
2026-03-09T14:44:08.472 INFO:teuthology.orchestra.run.vm07.stdout: "progress": null,
2026-03-09T14:44:08.472 INFO:teuthology.orchestra.run.vm07.stdout: "message": "",
2026-03-09T14:44:08.472 INFO:teuthology.orchestra.run.vm07.stdout: "is_paused": false
2026-03-09T14:44:08.472 INFO:teuthology.orchestra.run.vm07.stdout:}
2026-03-09T14:44:08.523 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph health detail'
2026-03-09T14:44:08.904 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:08 vm07 bash[55244]: audit 2026-03-09T14:44:07.992539+0000 mon.c (mon.1) 26 : audit [DBG] from='client.? 
192.168.123.107:0/852148132' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:44:08.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:08 vm07 bash[55244]: audit 2026-03-09T14:44:07.992539+0000 mon.c (mon.1) 26 : audit [DBG] from='client.? 192.168.123.107:0/852148132' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:44:08.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:08 vm07 bash[55244]: audit 2026-03-09T14:44:08.480389+0000 mgr.y (mgr.44103) 340 : audit [DBG] from='client.34582 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:44:08.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:08 vm07 bash[55244]: audit 2026-03-09T14:44:08.480389+0000 mgr.y (mgr.44103) 340 : audit [DBG] from='client.34582 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:44:08.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:08 vm07 bash[56315]: audit 2026-03-09T14:44:07.992539+0000 mon.c (mon.1) 26 : audit [DBG] from='client.? 192.168.123.107:0/852148132' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:44:08.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:08 vm07 bash[56315]: audit 2026-03-09T14:44:07.992539+0000 mon.c (mon.1) 26 : audit [DBG] from='client.? 192.168.123.107:0/852148132' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:44:08.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:08 vm07 bash[56315]: audit 2026-03-09T14:44:08.480389+0000 mgr.y (mgr.44103) 340 : audit [DBG] from='client.34582 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:44:08.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:08 vm07 bash[56315]: audit 2026-03-09T14:44:08.480389+0000 mgr.y (mgr.44103) 340 : audit [DBG] from='client.34582 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:44:08.994 INFO:teuthology.orchestra.run.vm07.stdout:HEALTH_OK 2026-03-09T14:44:09.049 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.overall | length == 1'"'"'' 2026-03-09T14:44:09.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:08 vm11 bash[43577]: audit 2026-03-09T14:44:07.992539+0000 mon.c (mon.1) 26 : audit [DBG] from='client.? 192.168.123.107:0/852148132' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:44:09.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:08 vm11 bash[43577]: audit 2026-03-09T14:44:07.992539+0000 mon.c (mon.1) 26 : audit [DBG] from='client.? 
192.168.123.107:0/852148132' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:44:09.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:08 vm11 bash[43577]: audit 2026-03-09T14:44:08.480389+0000 mgr.y (mgr.44103) 340 : audit [DBG] from='client.34582 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:44:09.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:08 vm11 bash[43577]: audit 2026-03-09T14:44:08.480389+0000 mgr.y (mgr.44103) 340 : audit [DBG] from='client.34582 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:44:09.538 INFO:teuthology.orchestra.run.vm07.stdout:true 2026-03-09T14:44:09.599 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.overall | keys'"'"' | grep $sha1' 2026-03-09T14:44:09.791 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:09 vm07 bash[56315]: cluster 2026-03-09T14:44:08.589160+0000 mgr.y (mgr.44103) 341 : cluster [DBG] pgmap v208: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:44:09.791 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:09 vm07 bash[56315]: cluster 2026-03-09T14:44:08.589160+0000 mgr.y (mgr.44103) 341 : cluster [DBG] pgmap v208: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:44:09.791 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:09 vm07 bash[56315]: audit 2026-03-09T14:44:09.002750+0000 mon.c (mon.1) 27 : audit [DBG] from='client.? 192.168.123.107:0/536520059' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:44:09.791 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:09 vm07 bash[56315]: audit 2026-03-09T14:44:09.002750+0000 mon.c (mon.1) 27 : audit [DBG] from='client.? 192.168.123.107:0/536520059' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:44:09.791 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:09 vm07 bash[56315]: audit 2026-03-09T14:44:09.533523+0000 mon.b (mon.2) 12 : audit [DBG] from='client.? 192.168.123.107:0/690493683' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:44:09.791 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:09 vm07 bash[56315]: audit 2026-03-09T14:44:09.533523+0000 mon.b (mon.2) 12 : audit [DBG] from='client.? 
192.168.123.107:0/690493683' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:44:09.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:09 vm07 bash[55244]: cluster 2026-03-09T14:44:08.589160+0000 mgr.y (mgr.44103) 341 : cluster [DBG] pgmap v208: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:44:09.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:09 vm07 bash[55244]: cluster 2026-03-09T14:44:08.589160+0000 mgr.y (mgr.44103) 341 : cluster [DBG] pgmap v208: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:44:09.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:09 vm07 bash[55244]: audit 2026-03-09T14:44:09.002750+0000 mon.c (mon.1) 27 : audit [DBG] from='client.? 192.168.123.107:0/536520059' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:44:09.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:09 vm07 bash[55244]: audit 2026-03-09T14:44:09.002750+0000 mon.c (mon.1) 27 : audit [DBG] from='client.? 192.168.123.107:0/536520059' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:44:09.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:09 vm07 bash[55244]: audit 2026-03-09T14:44:09.533523+0000 mon.b (mon.2) 12 : audit [DBG] from='client.? 192.168.123.107:0/690493683' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:44:09.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:09 vm07 bash[55244]: audit 2026-03-09T14:44:09.533523+0000 mon.b (mon.2) 12 : audit [DBG] from='client.? 192.168.123.107:0/690493683' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:44:10.072 INFO:teuthology.orchestra.run.vm07.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)" 2026-03-09T14:44:10.110 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ls | grep '"'"'^osd '"'"'' 2026-03-09T14:44:10.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:09 vm11 bash[43577]: cluster 2026-03-09T14:44:08.589160+0000 mgr.y (mgr.44103) 341 : cluster [DBG] pgmap v208: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:44:10.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:09 vm11 bash[43577]: cluster 2026-03-09T14:44:08.589160+0000 mgr.y (mgr.44103) 341 : cluster [DBG] pgmap v208: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:44:10.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:09 vm11 bash[43577]: audit 2026-03-09T14:44:09.002750+0000 mon.c (mon.1) 27 : audit [DBG] from='client.? 192.168.123.107:0/536520059' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:44:10.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:09 vm11 bash[43577]: audit 2026-03-09T14:44:09.002750+0000 mon.c (mon.1) 27 : audit [DBG] from='client.? 
192.168.123.107:0/536520059' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-09T14:44:10.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:09 vm11 bash[43577]: audit 2026-03-09T14:44:09.533523+0000 mon.b (mon.2) 12 : audit [DBG] from='client.? 192.168.123.107:0/690493683' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:44:10.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:09 vm11 bash[43577]: audit 2026-03-09T14:44:09.533523+0000 mon.b (mon.2) 12 : audit [DBG] from='client.? 192.168.123.107:0/690493683' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:44:10.523 INFO:teuthology.orchestra.run.vm07.stdout:osd 8 91s ago - 2026-03-09T14:44:10.562 INFO:teuthology.run_tasks:Running task cephadm.shell... 2026-03-09T14:44:10.564 INFO:tasks.cephadm:Running commands on role mon.a host ubuntu@vm07.local 2026-03-09T14:44:10.565 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- bash -c 'ceph orch upgrade ls' 2026-03-09T14:44:10.904 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:10 vm07 bash[56315]: audit 2026-03-09T14:44:10.070842+0000 mon.a (mon.0) 761 : audit [DBG] from='client.? 192.168.123.107:0/1949985226' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:44:10.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:10 vm07 bash[56315]: audit 2026-03-09T14:44:10.070842+0000 mon.a (mon.0) 761 : audit [DBG] from='client.? 192.168.123.107:0/1949985226' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:44:10.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:10 vm07 bash[55244]: audit 2026-03-09T14:44:10.070842+0000 mon.a (mon.0) 761 : audit [DBG] from='client.? 192.168.123.107:0/1949985226' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:44:10.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:10 vm07 bash[55244]: audit 2026-03-09T14:44:10.070842+0000 mon.a (mon.0) 761 : audit [DBG] from='client.? 192.168.123.107:0/1949985226' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:44:11.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:10 vm11 bash[43577]: audit 2026-03-09T14:44:10.070842+0000 mon.a (mon.0) 761 : audit [DBG] from='client.? 192.168.123.107:0/1949985226' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:44:11.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:10 vm11 bash[43577]: audit 2026-03-09T14:44:10.070842+0000 mon.a (mon.0) 761 : audit [DBG] from='client.? 
192.168.123.107:0/1949985226' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-09T14:44:12.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:11 vm07 bash[56315]: audit 2026-03-09T14:44:10.519728+0000 mgr.y (mgr.44103) 342 : audit [DBG] from='client.34597 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:44:12.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:11 vm07 bash[56315]: audit 2026-03-09T14:44:10.519728+0000 mgr.y (mgr.44103) 342 : audit [DBG] from='client.34597 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:44:12.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:11 vm07 bash[56315]: cluster 2026-03-09T14:44:10.589565+0000 mgr.y (mgr.44103) 343 : cluster [DBG] pgmap v209: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:44:12.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:11 vm07 bash[56315]: cluster 2026-03-09T14:44:10.589565+0000 mgr.y (mgr.44103) 343 : cluster [DBG] pgmap v209: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:44:12.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:11 vm07 bash[56315]: audit 2026-03-09T14:44:11.007751+0000 mgr.y (mgr.44103) 344 : audit [DBG] from='client.54473 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:44:12.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:11 vm07 bash[56315]: audit 2026-03-09T14:44:11.007751+0000 mgr.y (mgr.44103) 344 : audit [DBG] from='client.54473 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:44:12.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:11 vm07 bash[55244]: audit 2026-03-09T14:44:10.519728+0000 mgr.y (mgr.44103) 342 : audit [DBG] from='client.34597 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:44:12.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:11 vm07 bash[55244]: audit 2026-03-09T14:44:10.519728+0000 mgr.y (mgr.44103) 342 : audit [DBG] from='client.34597 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:44:12.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:11 vm07 bash[55244]: cluster 2026-03-09T14:44:10.589565+0000 mgr.y (mgr.44103) 343 : cluster [DBG] pgmap v209: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:44:12.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:11 vm07 bash[55244]: cluster 2026-03-09T14:44:10.589565+0000 mgr.y (mgr.44103) 343 : cluster [DBG] pgmap v209: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:44:12.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:11 vm07 bash[55244]: audit 2026-03-09T14:44:11.007751+0000 mgr.y (mgr.44103) 344 : audit [DBG] from='client.54473 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:44:12.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:11 vm07 bash[55244]: audit 2026-03-09T14:44:11.007751+0000 mgr.y (mgr.44103) 344 : audit [DBG] from='client.54473 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:44:12.251 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:11 vm11 bash[43577]: audit 2026-03-09T14:44:10.519728+0000 mgr.y (mgr.44103) 342 : audit [DBG] from='client.34597 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:44:12.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:11 vm11 bash[43577]: audit 2026-03-09T14:44:10.519728+0000 mgr.y (mgr.44103) 342 : audit [DBG] from='client.34597 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:44:12.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:11 vm11 bash[43577]: cluster 2026-03-09T14:44:10.589565+0000 mgr.y (mgr.44103) 343 : cluster [DBG] pgmap v209: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:44:12.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:11 vm11 bash[43577]: cluster 2026-03-09T14:44:10.589565+0000 mgr.y (mgr.44103) 343 : cluster [DBG] pgmap v209: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:44:12.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:11 vm11 bash[43577]: audit 2026-03-09T14:44:11.007751+0000 mgr.y (mgr.44103) 344 : audit [DBG] from='client.54473 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:44:12.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:11 vm11 bash[43577]: audit 2026-03-09T14:44:11.007751+0000 mgr.y (mgr.44103) 344 : audit [DBG] from='client.54473 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:44:12.427 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-09T14:44:12.427 INFO:teuthology.orchestra.run.vm07.stdout: "image": "quay.io/ceph/ceph", 2026-03-09T14:44:12.427 INFO:teuthology.orchestra.run.vm07.stdout: "registry": "quay.io", 2026-03-09T14:44:12.427 INFO:teuthology.orchestra.run.vm07.stdout: "bare_image": "ceph/ceph", 2026-03-09T14:44:12.427 INFO:teuthology.orchestra.run.vm07.stdout: "versions": [ 2026-03-09T14:44:12.427 INFO:teuthology.orchestra.run.vm07.stdout: "20.2.0", 2026-03-09T14:44:12.428 INFO:teuthology.orchestra.run.vm07.stdout: "20.1.1", 2026-03-09T14:44:12.428 INFO:teuthology.orchestra.run.vm07.stdout: "20.1.0", 2026-03-09T14:44:12.428 INFO:teuthology.orchestra.run.vm07.stdout: "19.2.3", 2026-03-09T14:44:12.428 INFO:teuthology.orchestra.run.vm07.stdout: "19.2.2", 2026-03-09T14:44:12.428 INFO:teuthology.orchestra.run.vm07.stdout: "19.2.1", 2026-03-09T14:44:12.428 INFO:teuthology.orchestra.run.vm07.stdout: "19.2.0" 2026-03-09T14:44:12.428 INFO:teuthology.orchestra.run.vm07.stdout: ] 2026-03-09T14:44:12.428 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-09T14:44:12.478 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- bash -c 'ceph orch upgrade ls --image quay.io/ceph/ceph --show-all-versions | grep 16.2.0' 2026-03-09T14:44:12.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:12 vm07 bash[55244]: audit 2026-03-09T14:44:12.230908+0000 mgr.y (mgr.44103) 345 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:44:12.905 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:12 vm07 bash[55244]: audit 2026-03-09T14:44:12.230908+0000 
mgr.y (mgr.44103) 345 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:44:12.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:12 vm07 bash[56315]: audit 2026-03-09T14:44:12.230908+0000 mgr.y (mgr.44103) 345 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:44:12.905 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:12 vm07 bash[56315]: audit 2026-03-09T14:44:12.230908+0000 mgr.y (mgr.44103) 345 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:44:13.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:12 vm11 bash[43577]: audit 2026-03-09T14:44:12.230908+0000 mgr.y (mgr.44103) 345 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:44:13.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:12 vm11 bash[43577]: audit 2026-03-09T14:44:12.230908+0000 mgr.y (mgr.44103) 345 : audit [DBG] from='client.34456 -' entity='client.iscsi.foo.vm07.ohlmos' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:44:13.808 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:44:13 vm07 bash[52213]: ::ffff:192.168.123.111 - - [09/Mar/2026:14:44:13] "GET /metrics HTTP/1.1" 200 38253 "" "Prometheus/2.51.0" 2026-03-09T14:44:14.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:13 vm07 bash[56315]: cluster 2026-03-09T14:44:12.590000+0000 mgr.y (mgr.44103) 346 : cluster [DBG] pgmap v210: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:44:14.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:13 vm07 bash[56315]: cluster 2026-03-09T14:44:12.590000+0000 mgr.y (mgr.44103) 346 : cluster [DBG] pgmap v210: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:44:14.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:13 vm07 bash[56315]: audit 2026-03-09T14:44:12.911691+0000 mgr.y (mgr.44103) 347 : audit [DBG] from='client.34606 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "image": "quay.io/ceph/ceph", "show_all_versions": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:44:14.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:13 vm07 bash[56315]: audit 2026-03-09T14:44:12.911691+0000 mgr.y (mgr.44103) 347 : audit [DBG] from='client.34606 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "image": "quay.io/ceph/ceph", "show_all_versions": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:44:14.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:13 vm07 bash[55244]: cluster 2026-03-09T14:44:12.590000+0000 mgr.y (mgr.44103) 346 : cluster [DBG] pgmap v210: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:44:14.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:13 vm07 bash[55244]: cluster 2026-03-09T14:44:12.590000+0000 mgr.y (mgr.44103) 346 : cluster [DBG] pgmap v210: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:44:14.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:13 vm07 bash[55244]: audit 2026-03-09T14:44:12.911691+0000 mgr.y (mgr.44103) 347 : audit [DBG] 
from='client.34606 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "image": "quay.io/ceph/ceph", "show_all_versions": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:44:14.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:13 vm07 bash[55244]: audit 2026-03-09T14:44:12.911691+0000 mgr.y (mgr.44103) 347 : audit [DBG] from='client.34606 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "image": "quay.io/ceph/ceph", "show_all_versions": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:44:14.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:13 vm11 bash[43577]: cluster 2026-03-09T14:44:12.590000+0000 mgr.y (mgr.44103) 346 : cluster [DBG] pgmap v210: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:44:14.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:13 vm11 bash[43577]: cluster 2026-03-09T14:44:12.590000+0000 mgr.y (mgr.44103) 346 : cluster [DBG] pgmap v210: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:44:14.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:13 vm11 bash[43577]: audit 2026-03-09T14:44:12.911691+0000 mgr.y (mgr.44103) 347 : audit [DBG] from='client.34606 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "image": "quay.io/ceph/ceph", "show_all_versions": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:44:14.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:13 vm11 bash[43577]: audit 2026-03-09T14:44:12.911691+0000 mgr.y (mgr.44103) 347 : audit [DBG] from='client.34606 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "image": "quay.io/ceph/ceph", "show_all_versions": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:44:14.380 INFO:teuthology.orchestra.run.vm07.stdout: "16.2.0", 2026-03-09T14:44:14.421 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- bash -c 'ceph orch upgrade ls --image quay.io/ceph/ceph --tags | grep v16.2.2' 2026-03-09T14:44:16.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:15 vm07 bash[55244]: cluster 2026-03-09T14:44:14.590391+0000 mgr.y (mgr.44103) 348 : cluster [DBG] pgmap v211: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:44:16.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:15 vm07 bash[55244]: cluster 2026-03-09T14:44:14.590391+0000 mgr.y (mgr.44103) 348 : cluster [DBG] pgmap v211: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:44:16.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:15 vm07 bash[55244]: audit 2026-03-09T14:44:14.846676+0000 mgr.y (mgr.44103) 349 : audit [DBG] from='client.54482 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "image": "quay.io/ceph/ceph", "tags": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:44:16.155 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:15 vm07 bash[55244]: audit 2026-03-09T14:44:14.846676+0000 mgr.y (mgr.44103) 349 : audit [DBG] from='client.54482 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "image": "quay.io/ceph/ceph", "tags": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:44:16.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:15 vm07 bash[56315]: cluster 
2026-03-09T14:44:14.590391+0000 mgr.y (mgr.44103) 348 : cluster [DBG] pgmap v211: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:44:16.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:15 vm07 bash[56315]: cluster 2026-03-09T14:44:14.590391+0000 mgr.y (mgr.44103) 348 : cluster [DBG] pgmap v211: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:44:16.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:15 vm07 bash[56315]: audit 2026-03-09T14:44:14.846676+0000 mgr.y (mgr.44103) 349 : audit [DBG] from='client.54482 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "image": "quay.io/ceph/ceph", "tags": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:44:16.155 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:15 vm07 bash[56315]: audit 2026-03-09T14:44:14.846676+0000 mgr.y (mgr.44103) 349 : audit [DBG] from='client.54482 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "image": "quay.io/ceph/ceph", "tags": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:44:16.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:15 vm11 bash[43577]: cluster 2026-03-09T14:44:14.590391+0000 mgr.y (mgr.44103) 348 : cluster [DBG] pgmap v211: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:44:16.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:15 vm11 bash[43577]: cluster 2026-03-09T14:44:14.590391+0000 mgr.y (mgr.44103) 348 : cluster [DBG] pgmap v211: 161 pgs: 161 active+clean; 457 KiB data, 291 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:44:16.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:15 vm11 bash[43577]: audit 2026-03-09T14:44:14.846676+0000 mgr.y (mgr.44103) 349 : audit [DBG] from='client.54482 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "image": "quay.io/ceph/ceph", "tags": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:44:16.251 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 09 14:44:15 vm11 bash[43577]: audit 2026-03-09T14:44:14.846676+0000 mgr.y (mgr.44103) 349 : audit [DBG] from='client.54482 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "image": "quay.io/ceph/ceph", "tags": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:44:16.315 INFO:teuthology.orchestra.run.vm07.stdout: "v16.2.2", 2026-03-09T14:44:16.316 INFO:teuthology.orchestra.run.vm07.stdout: "v16.2.2-20210505", 2026-03-09T14:44:16.357 DEBUG:teuthology.run_tasks:Unwinding manager cephadm 2026-03-09T14:44:16.359 INFO:tasks.cephadm:Teardown begin 2026-03-09T14:44:16.359 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T14:44:16.367 DEBUG:teuthology.orchestra.run.vm11:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T14:44:16.389 INFO:tasks.cephadm:Disabling cephadm mgr module 2026-03-09T14:44:16.389 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 -- ceph mgr module disable cephadm 2026-03-09T14:44:16.706 INFO:teuthology.orchestra.run.vm07.stderr:Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',) 2026-03-09T14:44:16.742 DEBUG:teuthology.orchestra.run:got remote process result: 1 
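Note on the teardown error above: `ceph mgr module disable cephadm` is invoked through `cephadm shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring` after those very files were removed by the preceding `rm -f`, so the client fails with ObjectNotFound and the command exits 1; teardown continues regardless. The checks that ran just before this point are the core post-upgrade assertions: `.overall` in `ceph versions` must contain exactly one entry, and that entry must mention the target build sha1. A minimal standalone sketch of the same convergence check, assuming admin access to the upgraded cluster and a caller-supplied TARGET_SHA1 variable (a hypothetical stand-in for the `$sha1` env var this run passes in):

    # fail unless every daemon reports a single ceph version...
    ceph versions | jq -e '.overall | length == 1'
    # ...and that one version string corresponds to the expected build sha1
    ceph versions | jq -e '.overall | keys' | grep "$TARGET_SHA1"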
2026-03-09T14:44:16.743 INFO:tasks.cephadm:Cleaning up testdir ceph.* files... 2026-03-09T14:44:16.743 DEBUG:teuthology.orchestra.run.vm07:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-09T14:44:16.745 DEBUG:teuthology.orchestra.run.vm11:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-09T14:44:16.748 INFO:tasks.cephadm:Stopping all daemons... 2026-03-09T14:44:16.748 INFO:tasks.cephadm.mon.a:Stopping mon.a... 2026-03-09T14:44:16.748 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@mon.a 2026-03-09T14:44:16.843 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:16 vm07 systemd[1]: Stopping Ceph mon.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040... 2026-03-09T14:44:17.027 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:16 vm07 bash[56315]: debug 2026-03-09T14:44:16.834+0000 7f49c1996640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-09T14:44:17.027 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:16 vm07 bash[56315]: debug 2026-03-09T14:44:16.834+0000 7f49c1996640 -1 mon.a@0(leader) e4 *** Got Signal Terminated *** 2026-03-09T14:44:17.027 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:44:16 vm07 bash[52213]: [09/Mar/2026:14:44:16] ENGINE Bus STOPPING 2026-03-09T14:44:17.027 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:44:16 vm07 bash[52213]: [09/Mar/2026:14:44:16] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T14:44:17.027 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:44:16 vm07 bash[52213]: [09/Mar/2026:14:44:16] ENGINE Bus STOPPED 2026-03-09T14:44:17.027 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:44:16 vm07 bash[52213]: [09/Mar/2026:14:44:16] ENGINE Bus STARTING 2026-03-09T14:44:17.027 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:44:17 vm07 bash[52213]: [09/Mar/2026:14:44:17] ENGINE Serving on http://:::9283 2026-03-09T14:44:17.027 INFO:journalctl@ceph.mgr.y.vm07.stdout:Mar 09 14:44:17 vm07 bash[52213]: [09/Mar/2026:14:44:17] ENGINE Bus STARTED 2026-03-09T14:44:17.089 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@mon.a.service' 2026-03-09T14:44:17.094 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:17 vm07 bash[79775]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-mon-a 2026-03-09T14:44:17.094 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:17 vm07 systemd[1]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@mon.a.service: Deactivated successfully. 2026-03-09T14:44:17.094 INFO:journalctl@ceph.mon.a.vm07.stdout:Mar 09 14:44:17 vm07 systemd[1]: Stopped Ceph mon.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:44:17.102 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T14:44:17.102 INFO:tasks.cephadm.mon.a:Stopped mon.a 2026-03-09T14:44:17.102 INFO:tasks.cephadm.mon.b:Stopping mon.c... 2026-03-09T14:44:17.102 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@mon.c 2026-03-09T14:44:17.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:17 vm07 systemd[1]: Stopping Ceph mon.c for f59f9828-1bc3-11f1-bfd8-7b3d0c866040... 
2026-03-09T14:44:17.404 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:17 vm07 bash[55244]: debug 2026-03-09T14:44:17.194+0000 7f773f4c8640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.c -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-09T14:44:17.405 INFO:journalctl@ceph.mon.c.vm07.stdout:Mar 09 14:44:17 vm07 bash[55244]: debug 2026-03-09T14:44:17.194+0000 7f773f4c8640 -1 mon.c@1(peon) e4 *** Got Signal Terminated *** 2026-03-09T14:44:17.480 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@mon.c.service' 2026-03-09T14:44:17.491 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T14:44:17.492 INFO:tasks.cephadm.mon.b:Stopped mon.c 2026-03-09T14:44:17.492 INFO:tasks.cephadm.mon.b:Stopping mon.b... 2026-03-09T14:44:17.492 DEBUG:teuthology.orchestra.run.vm11:> sudo systemctl stop ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@mon.b 2026-03-09T14:44:17.729 DEBUG:teuthology.orchestra.run.vm11:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@mon.b.service' 2026-03-09T14:44:17.741 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T14:44:17.741 INFO:tasks.cephadm.mon.b:Stopped mon.b 2026-03-09T14:44:17.741 INFO:tasks.cephadm.mgr.y:Stopping mgr.y... 2026-03-09T14:44:17.741 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@mgr.y 2026-03-09T14:44:17.900 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@mgr.y.service' 2026-03-09T14:44:17.911 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T14:44:17.911 INFO:tasks.cephadm.mgr.y:Stopped mgr.y 2026-03-09T14:44:17.911 INFO:tasks.cephadm.mgr.x:Stopping mgr.x... 2026-03-09T14:44:17.911 DEBUG:teuthology.orchestra.run.vm11:> sudo systemctl stop ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@mgr.x 2026-03-09T14:44:18.036 DEBUG:teuthology.orchestra.run.vm11:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@mgr.x.service' 2026-03-09T14:44:18.047 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T14:44:18.047 INFO:tasks.cephadm.mgr.x:Stopped mgr.x 2026-03-09T14:44:18.047 INFO:tasks.cephadm.osd.0:Stopping osd.0... 
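Each daemon is torn down with the same two-step pattern seen above for the mons and mgrs: stop the per-fsid systemd unit, then kill the `journalctl -f` follower that teuthology had attached to that unit for log capture. A condensed sketch of one iteration, with FSID and NAME as hypothetical variables holding the values visible in this log:

    FSID=f59f9828-1bc3-11f1-bfd8-7b3d0c866040   # cluster fsid for this run
    NAME=osd.0                                  # daemon being stopped
    sudo systemctl stop "ceph-${FSID}@${NAME}"
    # stop the background journalctl follower streaming this unit's logs
    sudo pkill -f "journalctl -f -n 0 -u ceph-${FSID}@${NAME}.service"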
2026-03-09T14:44:18.047 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@osd.0 2026-03-09T14:44:18.251 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:44:18 vm11 bash[41290]: ts=2026-03-09T14:44:18.093Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=ceph-exporter msg="Unable to refresh target groups" err="Get \"http://192.168.123.107:8765/sd/prometheus/sd-config?service=ceph-exporter\": dial tcp 192.168.123.107:8765: connect: connection refused" 2026-03-09T14:44:18.251 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:44:18 vm11 bash[41290]: ts=2026-03-09T14:44:18.093Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=nfs msg="Unable to refresh target groups" err="Get \"http://192.168.123.107:8765/sd/prometheus/sd-config?service=nfs\": dial tcp 192.168.123.107:8765: connect: connection refused" 2026-03-09T14:44:18.251 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:44:18 vm11 bash[41290]: ts=2026-03-09T14:44:18.093Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=ceph msg="Unable to refresh target groups" err="Get \"http://192.168.123.107:8765/sd/prometheus/sd-config?service=mgr-prometheus\": dial tcp 192.168.123.107:8765: connect: connection refused" 2026-03-09T14:44:18.251 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:44:18 vm11 bash[41290]: ts=2026-03-09T14:44:18.093Z caller=refresh.go:90 level=error component="discovery manager notify" discovery=http config=config-0 msg="Unable to refresh target groups" err="Get \"http://192.168.123.107:8765/sd/prometheus/sd-config?service=alertmanager\": dial tcp 192.168.123.107:8765: connect: connection refused" 2026-03-09T14:44:18.251 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:44:18 vm11 bash[41290]: ts=2026-03-09T14:44:18.093Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=node msg="Unable to refresh target groups" err="Get \"http://192.168.123.107:8765/sd/prometheus/sd-config?service=node-exporter\": dial tcp 192.168.123.107:8765: connect: connection refused" 2026-03-09T14:44:18.251 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 09 14:44:18 vm11 bash[41290]: ts=2026-03-09T14:44:18.093Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=nvmeof msg="Unable to refresh target groups" err="Get \"http://192.168.123.107:8765/sd/prometheus/sd-config?service=nvmeof\": dial tcp 192.168.123.107:8765: connect: connection refused" 2026-03-09T14:44:18.405 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:44:18 vm07 systemd[1]: Stopping Ceph osd.0 for f59f9828-1bc3-11f1-bfd8-7b3d0c866040... 
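The burst of Prometheus refresh errors above is a side effect of stopping mgr.y: the cephadm HTTP service-discovery endpoint on port 8765 is served by the active mgr, so once it goes down every `GET /sd/prometheus/sd-config?service=...` from Prometheus is refused until the monitoring stack itself is removed. While a mgr is up, the same endpoint can be probed by hand, for example:

    # hypothetical manual probe of the cephadm service-discovery endpoint
    curl -s 'http://192.168.123.107:8765/sd/prometheus/sd-config?service=node-exporter'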
2026-03-09T14:44:18.405 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:44:18 vm07 bash[63469]: debug 2026-03-09T14:44:18.094+0000 7f65f04f5640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T14:44:18.405 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:44:18 vm07 bash[63469]: debug 2026-03-09T14:44:18.094+0000 7f65f04f5640 -1 osd.0 139 *** Got signal Terminated *** 2026-03-09T14:44:18.405 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:44:18 vm07 bash[63469]: debug 2026-03-09T14:44:18.094+0000 7f65f04f5640 -1 osd.0 139 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T14:44:23.389 INFO:journalctl@ceph.osd.0.vm07.stdout:Mar 09 14:44:23 vm07 bash[80046]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-osd-0 2026-03-09T14:44:23.416 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@osd.0.service' 2026-03-09T14:44:23.441 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T14:44:23.441 INFO:tasks.cephadm.osd.0:Stopped osd.0 2026-03-09T14:44:23.442 INFO:tasks.cephadm.osd.1:Stopping osd.1... 2026-03-09T14:44:23.442 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@osd.1 2026-03-09T14:44:23.655 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:44:23 vm07 systemd[1]: Stopping Ceph osd.1 for f59f9828-1bc3-11f1-bfd8-7b3d0c866040... 2026-03-09T14:44:23.655 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:44:23 vm07 bash[65524]: debug 2026-03-09T14:44:23.534+0000 7f75c80a4640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T14:44:23.655 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:44:23 vm07 bash[65524]: debug 2026-03-09T14:44:23.534+0000 7f75c80a4640 -1 osd.1 139 *** Got signal Terminated *** 2026-03-09T14:44:23.655 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:44:23 vm07 bash[65524]: debug 2026-03-09T14:44:23.534+0000 7f75c80a4640 -1 osd.1 139 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T14:44:28.865 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 09 14:44:28 vm07 bash[80226]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-osd-1 2026-03-09T14:44:28.929 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@osd.1.service' 2026-03-09T14:44:28.942 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T14:44:28.942 INFO:tasks.cephadm.osd.1:Stopped osd.1 2026-03-09T14:44:28.942 INFO:tasks.cephadm.osd.2:Stopping osd.2... 2026-03-09T14:44:28.942 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@osd.2 2026-03-09T14:44:29.155 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:44:29 vm07 systemd[1]: Stopping Ceph osd.2 for f59f9828-1bc3-11f1-bfd8-7b3d0c866040... 
2026-03-09T14:44:29.155 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:44:29 vm07 bash[61284]: debug 2026-03-09T14:44:29.030+0000 7ff0eab39640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T14:44:29.155 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:44:29 vm07 bash[61284]: debug 2026-03-09T14:44:29.030+0000 7ff0eab39640 -1 osd.2 139 *** Got signal Terminated *** 2026-03-09T14:44:29.155 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:44:29 vm07 bash[61284]: debug 2026-03-09T14:44:29.030+0000 7ff0eab39640 -1 osd.2 139 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T14:44:34.359 INFO:journalctl@ceph.osd.2.vm07.stdout:Mar 09 14:44:34 vm07 bash[80417]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-osd-2 2026-03-09T14:44:34.405 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@osd.2.service' 2026-03-09T14:44:34.415 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T14:44:34.415 INFO:tasks.cephadm.osd.2:Stopped osd.2 2026-03-09T14:44:34.415 INFO:tasks.cephadm.osd.3:Stopping osd.3... 2026-03-09T14:44:34.415 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@osd.3 2026-03-09T14:44:34.655 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:44:34 vm07 systemd[1]: Stopping Ceph osd.3 for f59f9828-1bc3-11f1-bfd8-7b3d0c866040... 2026-03-09T14:44:34.655 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:44:34 vm07 bash[59212]: debug 2026-03-09T14:44:34.505+0000 7f752bdf5640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.3 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T14:44:34.655 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:44:34 vm07 bash[59212]: debug 2026-03-09T14:44:34.505+0000 7f752bdf5640 -1 osd.3 139 *** Got signal Terminated *** 2026-03-09T14:44:34.655 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:44:34 vm07 bash[59212]: debug 2026-03-09T14:44:34.505+0000 7f752bdf5640 -1 osd.3 139 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T14:44:39.834 INFO:journalctl@ceph.osd.3.vm07.stdout:Mar 09 14:44:39 vm07 bash[80598]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-osd-3 2026-03-09T14:44:39.869 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@osd.3.service' 2026-03-09T14:44:39.880 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T14:44:39.880 INFO:tasks.cephadm.osd.3:Stopped osd.3 2026-03-09T14:44:39.880 INFO:tasks.cephadm.osd.4:Stopping osd.4... 2026-03-09T14:44:39.880 DEBUG:teuthology.orchestra.run.vm11:> sudo systemctl stop ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@osd.4 2026-03-09T14:44:40.251 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:44:39 vm11 systemd[1]: Stopping Ceph osd.4 for f59f9828-1bc3-11f1-bfd8-7b3d0c866040... 
2026-03-09T14:44:40.251 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:44:39 vm11 bash[45819]: debug 2026-03-09T14:44:39.933+0000 7fd77a1f2640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.4 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T14:44:40.251 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:44:39 vm11 bash[45819]: debug 2026-03-09T14:44:39.933+0000 7fd77a1f2640 -1 osd.4 139 *** Got signal Terminated *** 2026-03-09T14:44:40.251 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:44:39 vm11 bash[45819]: debug 2026-03-09T14:44:39.933+0000 7fd77a1f2640 -1 osd.4 139 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T14:44:45.071 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:44:44 vm11 bash[45819]: debug 2026-03-09T14:44:44.821+0000 7fd77680b640 -1 osd.4 139 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-09T14:44:20.630140+0000 front 2026-03-09T14:44:20.630233+0000 (oldest deadline 2026-03-09T14:44:44.129912+0000) 2026-03-09T14:44:45.071 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 09 14:44:45 vm11 bash[62585]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-osd-4 2026-03-09T14:44:45.284 DEBUG:teuthology.orchestra.run.vm11:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@osd.4.service' 2026-03-09T14:44:45.295 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T14:44:45.295 INFO:tasks.cephadm.osd.4:Stopped osd.4 2026-03-09T14:44:45.295 INFO:tasks.cephadm.osd.5:Stopping osd.5... 2026-03-09T14:44:45.295 DEBUG:teuthology.orchestra.run.vm11:> sudo systemctl stop ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@osd.5 2026-03-09T14:44:45.707 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:44:45 vm11 systemd[1]: Stopping Ceph osd.5 for f59f9828-1bc3-11f1-bfd8-7b3d0c866040... 
2026-03-09T14:44:45.707 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:44:45 vm11 bash[47833]: debug 2026-03-09T14:44:45.377+0000 7f6e8db6f640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.5 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T14:44:45.707 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:44:45 vm11 bash[47833]: debug 2026-03-09T14:44:45.377+0000 7f6e8db6f640 -1 osd.5 139 *** Got signal Terminated *** 2026-03-09T14:44:45.708 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:44:45 vm11 bash[47833]: debug 2026-03-09T14:44:45.377+0000 7f6e8db6f640 -1 osd.5 139 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T14:44:46.001 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:44:45 vm11 bash[47833]: debug 2026-03-09T14:44:45.709+0000 7f6e89987640 -1 osd.5 139 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-09T14:44:20.427584+0000 front 2026-03-09T14:44:20.427817+0000 (oldest deadline 2026-03-09T14:44:45.127290+0000) 2026-03-09T14:44:46.751 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:44:46 vm11 bash[49856]: debug 2026-03-09T14:44:46.457+0000 7f9f36747640 -1 osd.6 139 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-09T14:44:21.936294+0000 front 2026-03-09T14:44:21.936000+0000 (oldest deadline 2026-03-09T14:44:46.035856+0000) 2026-03-09T14:44:46.751 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:44:46 vm11 bash[47833]: debug 2026-03-09T14:44:46.697+0000 7f6e89987640 -1 osd.5 139 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-09T14:44:20.427584+0000 front 2026-03-09T14:44:20.427817+0000 (oldest deadline 2026-03-09T14:44:45.127290+0000) 2026-03-09T14:44:47.250 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:44:46 vm11 bash[51863]: debug 2026-03-09T14:44:46.969+0000 7f9c1b28c640 -1 osd.7 139 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-09T14:44:21.479965+0000 front 2026-03-09T14:44:21.479975+0000 (oldest deadline 2026-03-09T14:44:46.179599+0000) 2026-03-09T14:44:47.736 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:44:47 vm11 bash[49856]: debug 2026-03-09T14:44:47.481+0000 7f9f36747640 -1 osd.6 139 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-09T14:44:21.936294+0000 front 2026-03-09T14:44:21.936000+0000 (oldest deadline 2026-03-09T14:44:46.035856+0000) 2026-03-09T14:44:48.000 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:44:47 vm11 bash[47833]: debug 2026-03-09T14:44:47.741+0000 7f6e89987640 -1 osd.5 139 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-09T14:44:20.427584+0000 front 2026-03-09T14:44:20.427817+0000 (oldest deadline 2026-03-09T14:44:45.127290+0000) 2026-03-09T14:44:48.456 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:44:48 vm11 bash[51863]: debug 2026-03-09T14:44:48.005+0000 7f9c1b28c640 -1 osd.7 139 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-09T14:44:21.479965+0000 front 2026-03-09T14:44:21.479975+0000 (oldest deadline 2026-03-09T14:44:46.179599+0000) 2026-03-09T14:44:49.250 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:44:48 vm11 bash[51863]: debug 2026-03-09T14:44:48.977+0000 7f9c1b28c640 -1 osd.7 139 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-09T14:44:21.479965+0000 front 2026-03-09T14:44:21.479975+0000 (oldest deadline 2026-03-09T14:44:46.179599+0000) 
2026-03-09T14:44:49.250 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:44:48 vm11 bash[49856]: debug 2026-03-09T14:44:48.457+0000 7f9f36747640 -1 osd.6 139 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-09T14:44:21.936294+0000 front 2026-03-09T14:44:21.936000+0000 (oldest deadline 2026-03-09T14:44:46.035856+0000) 2026-03-09T14:44:49.251 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:44:48 vm11 bash[47833]: debug 2026-03-09T14:44:48.761+0000 7f6e89987640 -1 osd.5 139 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-09T14:44:20.427584+0000 front 2026-03-09T14:44:20.427817+0000 (oldest deadline 2026-03-09T14:44:45.127290+0000) 2026-03-09T14:44:49.750 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:44:49 vm11 bash[49856]: debug 2026-03-09T14:44:49.417+0000 7f9f36747640 -1 osd.6 139 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-09T14:44:21.936294+0000 front 2026-03-09T14:44:21.936000+0000 (oldest deadline 2026-03-09T14:44:46.035856+0000) 2026-03-09T14:44:50.250 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:44:49 vm11 bash[47833]: debug 2026-03-09T14:44:49.765+0000 7f6e89987640 -1 osd.5 139 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-09T14:44:20.427584+0000 front 2026-03-09T14:44:20.427817+0000 (oldest deadline 2026-03-09T14:44:45.127290+0000) 2026-03-09T14:44:50.250 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:44:49 vm11 bash[51863]: debug 2026-03-09T14:44:49.945+0000 7f9c1b28c640 -1 osd.7 139 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-09T14:44:21.479965+0000 front 2026-03-09T14:44:21.479975+0000 (oldest deadline 2026-03-09T14:44:46.179599+0000) 2026-03-09T14:44:50.682 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:44:50 vm11 bash[49856]: debug 2026-03-09T14:44:50.417+0000 7f9f36747640 -1 osd.6 139 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-09T14:44:21.936294+0000 front 2026-03-09T14:44:21.936000+0000 (oldest deadline 2026-03-09T14:44:46.035856+0000) 2026-03-09T14:44:50.682 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:44:50 vm11 bash[49856]: debug 2026-03-09T14:44:50.417+0000 7f9f36747640 -1 osd.6 139 heartbeat_check: no reply from 192.168.123.107:6814 osd.1 since back 2026-03-09T14:44:26.036144+0000 front 2026-03-09T14:44:26.036334+0000 (oldest deadline 2026-03-09T14:44:49.536070+0000) 2026-03-09T14:44:50.682 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 09 14:44:50 vm11 bash[62775]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-osd-5 2026-03-09T14:44:50.726 DEBUG:teuthology.orchestra.run.vm11:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@osd.5.service' 2026-03-09T14:44:50.738 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T14:44:50.738 INFO:tasks.cephadm.osd.5:Stopped osd.5 2026-03-09T14:44:50.738 INFO:tasks.cephadm.osd.6:Stopping osd.6... 
2026-03-09T14:44:50.738 DEBUG:teuthology.orchestra.run.vm11:> sudo systemctl stop ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@osd.6 2026-03-09T14:44:51.000 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:44:50 vm11 bash[51863]: debug 2026-03-09T14:44:50.909+0000 7f9c1b28c640 -1 osd.7 139 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-09T14:44:21.479965+0000 front 2026-03-09T14:44:21.479975+0000 (oldest deadline 2026-03-09T14:44:46.179599+0000) 2026-03-09T14:44:51.001 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:44:50 vm11 systemd[1]: Stopping Ceph osd.6 for f59f9828-1bc3-11f1-bfd8-7b3d0c866040... 2026-03-09T14:44:51.001 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:44:50 vm11 bash[49856]: debug 2026-03-09T14:44:50.825+0000 7f9f3a12e640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.6 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T14:44:51.001 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:44:50 vm11 bash[49856]: debug 2026-03-09T14:44:50.825+0000 7f9f3a12e640 -1 osd.6 139 *** Got signal Terminated *** 2026-03-09T14:44:51.001 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:44:50 vm11 bash[49856]: debug 2026-03-09T14:44:50.825+0000 7f9f3a12e640 -1 osd.6 139 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T14:44:51.751 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:44:51 vm11 bash[49856]: debug 2026-03-09T14:44:51.393+0000 7f9f36747640 -1 osd.6 139 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-09T14:44:21.936294+0000 front 2026-03-09T14:44:21.936000+0000 (oldest deadline 2026-03-09T14:44:46.035856+0000) 2026-03-09T14:44:51.751 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:44:51 vm11 bash[49856]: debug 2026-03-09T14:44:51.393+0000 7f9f36747640 -1 osd.6 139 heartbeat_check: no reply from 192.168.123.107:6814 osd.1 since back 2026-03-09T14:44:26.036144+0000 front 2026-03-09T14:44:26.036334+0000 (oldest deadline 2026-03-09T14:44:49.536070+0000) 2026-03-09T14:44:52.251 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:44:51 vm11 bash[51863]: debug 2026-03-09T14:44:51.897+0000 7f9c1b28c640 -1 osd.7 139 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-09T14:44:21.479965+0000 front 2026-03-09T14:44:21.479975+0000 (oldest deadline 2026-03-09T14:44:46.179599+0000) 2026-03-09T14:44:52.751 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:44:52 vm11 bash[49856]: debug 2026-03-09T14:44:52.437+0000 7f9f36747640 -1 osd.6 139 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-09T14:44:21.936294+0000 front 2026-03-09T14:44:21.936000+0000 (oldest deadline 2026-03-09T14:44:46.035856+0000) 2026-03-09T14:44:52.751 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:44:52 vm11 bash[49856]: debug 2026-03-09T14:44:52.437+0000 7f9f36747640 -1 osd.6 139 heartbeat_check: no reply from 192.168.123.107:6814 osd.1 since back 2026-03-09T14:44:26.036144+0000 front 2026-03-09T14:44:26.036334+0000 (oldest deadline 2026-03-09T14:44:49.536070+0000) 2026-03-09T14:44:53.250 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:44:52 vm11 bash[51863]: debug 2026-03-09T14:44:52.929+0000 7f9c1b28c640 -1 osd.7 139 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-09T14:44:21.479965+0000 front 2026-03-09T14:44:21.479975+0000 (oldest deadline 2026-03-09T14:44:46.179599+0000) 2026-03-09T14:44:53.251 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 
09 14:44:52 vm11 bash[51863]: debug 2026-03-09T14:44:52.929+0000 7f9c1b28c640 -1 osd.7 139 heartbeat_check: no reply from 192.168.123.107:6814 osd.1 since back 2026-03-09T14:44:27.880391+0000 front 2026-03-09T14:44:27.880415+0000 (oldest deadline 2026-03-09T14:44:52.579975+0000) 2026-03-09T14:44:53.751 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:44:53 vm11 bash[49856]: debug 2026-03-09T14:44:53.397+0000 7f9f36747640 -1 osd.6 139 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-09T14:44:21.936294+0000 front 2026-03-09T14:44:21.936000+0000 (oldest deadline 2026-03-09T14:44:46.035856+0000) 2026-03-09T14:44:53.751 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:44:53 vm11 bash[49856]: debug 2026-03-09T14:44:53.397+0000 7f9f36747640 -1 osd.6 139 heartbeat_check: no reply from 192.168.123.107:6814 osd.1 since back 2026-03-09T14:44:26.036144+0000 front 2026-03-09T14:44:26.036334+0000 (oldest deadline 2026-03-09T14:44:49.536070+0000) 2026-03-09T14:44:54.250 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:44:53 vm11 bash[51863]: debug 2026-03-09T14:44:53.941+0000 7f9c1b28c640 -1 osd.7 139 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-09T14:44:21.479965+0000 front 2026-03-09T14:44:21.479975+0000 (oldest deadline 2026-03-09T14:44:46.179599+0000) 2026-03-09T14:44:54.250 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:44:53 vm11 bash[51863]: debug 2026-03-09T14:44:53.941+0000 7f9c1b28c640 -1 osd.7 139 heartbeat_check: no reply from 192.168.123.107:6814 osd.1 since back 2026-03-09T14:44:27.880391+0000 front 2026-03-09T14:44:27.880415+0000 (oldest deadline 2026-03-09T14:44:52.579975+0000) 2026-03-09T14:44:54.750 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:44:54 vm11 bash[49856]: debug 2026-03-09T14:44:54.401+0000 7f9f36747640 -1 osd.6 139 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-09T14:44:21.936294+0000 front 2026-03-09T14:44:21.936000+0000 (oldest deadline 2026-03-09T14:44:46.035856+0000) 2026-03-09T14:44:54.751 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:44:54 vm11 bash[49856]: debug 2026-03-09T14:44:54.401+0000 7f9f36747640 -1 osd.6 139 heartbeat_check: no reply from 192.168.123.107:6814 osd.1 since back 2026-03-09T14:44:26.036144+0000 front 2026-03-09T14:44:26.036334+0000 (oldest deadline 2026-03-09T14:44:49.536070+0000) 2026-03-09T14:44:55.250 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:44:54 vm11 bash[51863]: debug 2026-03-09T14:44:54.953+0000 7f9c1b28c640 -1 osd.7 139 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-09T14:44:21.479965+0000 front 2026-03-09T14:44:21.479975+0000 (oldest deadline 2026-03-09T14:44:46.179599+0000) 2026-03-09T14:44:55.250 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:44:54 vm11 bash[51863]: debug 2026-03-09T14:44:54.953+0000 7f9c1b28c640 -1 osd.7 139 heartbeat_check: no reply from 192.168.123.107:6814 osd.1 since back 2026-03-09T14:44:27.880391+0000 front 2026-03-09T14:44:27.880415+0000 (oldest deadline 2026-03-09T14:44:52.579975+0000) 2026-03-09T14:44:55.750 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:44:55 vm11 bash[49856]: debug 2026-03-09T14:44:55.357+0000 7f9f36747640 -1 osd.6 139 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-09T14:44:21.936294+0000 front 2026-03-09T14:44:21.936000+0000 (oldest deadline 2026-03-09T14:44:46.035856+0000) 2026-03-09T14:44:55.750 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:44:55 vm11 bash[49856]: debug 2026-03-09T14:44:55.357+0000 7f9f36747640 -1 
osd.6 139 heartbeat_check: no reply from 192.168.123.107:6814 osd.1 since back 2026-03-09T14:44:26.036144+0000 front 2026-03-09T14:44:26.036334+0000 (oldest deadline 2026-03-09T14:44:49.536070+0000) 2026-03-09T14:44:55.750 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:44:55 vm11 bash[49856]: debug 2026-03-09T14:44:55.357+0000 7f9f36747640 -1 osd.6 139 heartbeat_check: no reply from 192.168.123.107:6822 osd.2 since back 2026-03-09T14:44:29.536782+0000 front 2026-03-09T14:44:29.536749+0000 (oldest deadline 2026-03-09T14:44:54.836263+0000) 2026-03-09T14:44:56.125 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 09 14:44:55 vm11 bash[62962]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-osd-6 2026-03-09T14:44:56.126 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:44:55 vm11 bash[51863]: debug 2026-03-09T14:44:55.941+0000 7f9c1b28c640 -1 osd.7 139 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-09T14:44:21.479965+0000 front 2026-03-09T14:44:21.479975+0000 (oldest deadline 2026-03-09T14:44:46.179599+0000) 2026-03-09T14:44:56.126 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:44:55 vm11 bash[51863]: debug 2026-03-09T14:44:55.941+0000 7f9c1b28c640 -1 osd.7 139 heartbeat_check: no reply from 192.168.123.107:6814 osd.1 since back 2026-03-09T14:44:27.880391+0000 front 2026-03-09T14:44:27.880415+0000 (oldest deadline 2026-03-09T14:44:52.579975+0000) 2026-03-09T14:44:56.165 DEBUG:teuthology.orchestra.run.vm11:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@osd.6.service' 2026-03-09T14:44:56.175 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T14:44:56.175 INFO:tasks.cephadm.osd.6:Stopped osd.6 2026-03-09T14:44:56.175 INFO:tasks.cephadm.osd.7:Stopping osd.7... 2026-03-09T14:44:56.175 DEBUG:teuthology.orchestra.run.vm11:> sudo systemctl stop ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@osd.7 2026-03-09T14:44:56.501 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:44:56 vm11 systemd[1]: Stopping Ceph osd.7 for f59f9828-1bc3-11f1-bfd8-7b3d0c866040... 
2026-03-09T14:44:56.501 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:44:56 vm11 bash[51863]: debug 2026-03-09T14:44:56.261+0000 7f9c1f474640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.7 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T14:44:56.501 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:44:56 vm11 bash[51863]: debug 2026-03-09T14:44:56.261+0000 7f9c1f474640 -1 osd.7 139 *** Got signal Terminated *** 2026-03-09T14:44:56.501 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:44:56 vm11 bash[51863]: debug 2026-03-09T14:44:56.261+0000 7f9c1f474640 -1 osd.7 139 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T14:44:57.250 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:44:56 vm11 bash[51863]: debug 2026-03-09T14:44:56.989+0000 7f9c1b28c640 -1 osd.7 139 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-09T14:44:21.479965+0000 front 2026-03-09T14:44:21.479975+0000 (oldest deadline 2026-03-09T14:44:46.179599+0000) 2026-03-09T14:44:57.250 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:44:56 vm11 bash[51863]: debug 2026-03-09T14:44:56.989+0000 7f9c1b28c640 -1 osd.7 139 heartbeat_check: no reply from 192.168.123.107:6814 osd.1 since back 2026-03-09T14:44:27.880391+0000 front 2026-03-09T14:44:27.880415+0000 (oldest deadline 2026-03-09T14:44:52.579975+0000) 2026-03-09T14:44:58.501 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:44:58 vm11 bash[51863]: debug 2026-03-09T14:44:58.029+0000 7f9c1b28c640 -1 osd.7 139 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-09T14:44:21.479965+0000 front 2026-03-09T14:44:21.479975+0000 (oldest deadline 2026-03-09T14:44:46.179599+0000) 2026-03-09T14:44:58.501 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:44:58 vm11 bash[51863]: debug 2026-03-09T14:44:58.029+0000 7f9c1b28c640 -1 osd.7 139 heartbeat_check: no reply from 192.168.123.107:6814 osd.1 since back 2026-03-09T14:44:27.880391+0000 front 2026-03-09T14:44:27.880415+0000 (oldest deadline 2026-03-09T14:44:52.579975+0000) 2026-03-09T14:44:58.502 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:44:58 vm11 bash[51863]: debug 2026-03-09T14:44:58.029+0000 7f9c1b28c640 -1 osd.7 139 heartbeat_check: no reply from 192.168.123.107:6822 osd.2 since back 2026-03-09T14:44:32.580432+0000 front 2026-03-09T14:44:32.580350+0000 (oldest deadline 2026-03-09T14:44:57.280149+0000) 2026-03-09T14:44:59.500 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:44:59 vm11 bash[51863]: debug 2026-03-09T14:44:59.017+0000 7f9c1b28c640 -1 osd.7 139 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-09T14:44:21.479965+0000 front 2026-03-09T14:44:21.479975+0000 (oldest deadline 2026-03-09T14:44:46.179599+0000) 2026-03-09T14:44:59.501 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:44:59 vm11 bash[51863]: debug 2026-03-09T14:44:59.017+0000 7f9c1b28c640 -1 osd.7 139 heartbeat_check: no reply from 192.168.123.107:6814 osd.1 since back 2026-03-09T14:44:27.880391+0000 front 2026-03-09T14:44:27.880415+0000 (oldest deadline 2026-03-09T14:44:52.579975+0000) 2026-03-09T14:44:59.501 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:44:59 vm11 bash[51863]: debug 2026-03-09T14:44:59.017+0000 7f9c1b28c640 -1 osd.7 139 heartbeat_check: no reply from 192.168.123.107:6822 osd.2 since back 2026-03-09T14:44:32.580432+0000 front 2026-03-09T14:44:32.580350+0000 (oldest deadline 2026-03-09T14:44:57.280149+0000) 
2026-03-09T14:45:00.500 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:45:00 vm11 bash[51863]: debug 2026-03-09T14:45:00.010+0000 7f9c1b28c640 -1 osd.7 139 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-09T14:44:21.479965+0000 front 2026-03-09T14:44:21.479975+0000 (oldest deadline 2026-03-09T14:44:46.179599+0000) 2026-03-09T14:45:00.500 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:45:00 vm11 bash[51863]: debug 2026-03-09T14:45:00.010+0000 7f9c1b28c640 -1 osd.7 139 heartbeat_check: no reply from 192.168.123.107:6814 osd.1 since back 2026-03-09T14:44:27.880391+0000 front 2026-03-09T14:44:27.880415+0000 (oldest deadline 2026-03-09T14:44:52.579975+0000) 2026-03-09T14:45:00.500 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:45:00 vm11 bash[51863]: debug 2026-03-09T14:45:00.010+0000 7f9c1b28c640 -1 osd.7 139 heartbeat_check: no reply from 192.168.123.107:6822 osd.2 since back 2026-03-09T14:44:32.580432+0000 front 2026-03-09T14:44:32.580350+0000 (oldest deadline 2026-03-09T14:44:57.280149+0000) 2026-03-09T14:45:01.294 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:45:01 vm11 bash[51863]: debug 2026-03-09T14:45:01.010+0000 7f9c1b28c640 -1 osd.7 139 heartbeat_check: no reply from 192.168.123.107:6806 osd.0 since back 2026-03-09T14:44:21.479965+0000 front 2026-03-09T14:44:21.479975+0000 (oldest deadline 2026-03-09T14:44:46.179599+0000) 2026-03-09T14:45:01.294 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:45:01 vm11 bash[51863]: debug 2026-03-09T14:45:01.010+0000 7f9c1b28c640 -1 osd.7 139 heartbeat_check: no reply from 192.168.123.107:6814 osd.1 since back 2026-03-09T14:44:27.880391+0000 front 2026-03-09T14:44:27.880415+0000 (oldest deadline 2026-03-09T14:44:52.579975+0000) 2026-03-09T14:45:01.294 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:45:01 vm11 bash[51863]: debug 2026-03-09T14:45:01.010+0000 7f9c1b28c640 -1 osd.7 139 heartbeat_check: no reply from 192.168.123.107:6822 osd.2 since back 2026-03-09T14:44:32.580432+0000 front 2026-03-09T14:44:32.580350+0000 (oldest deadline 2026-03-09T14:44:57.280149+0000) 2026-03-09T14:45:01.584 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 09 14:45:01 vm11 bash[63141]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-osd-7 2026-03-09T14:45:01.628 DEBUG:teuthology.orchestra.run.vm11:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@osd.7.service' 2026-03-09T14:45:01.638 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T14:45:01.638 INFO:tasks.cephadm.osd.7:Stopped osd.7 2026-03-09T14:45:01.638 INFO:tasks.cephadm.prometheus.a:Stopping prometheus.a... 2026-03-09T14:45:01.638 DEBUG:teuthology.orchestra.run.vm11:> sudo systemctl stop ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@prometheus.a 2026-03-09T14:45:01.771 DEBUG:teuthology.orchestra.run.vm11:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@prometheus.a.service' 2026-03-09T14:45:01.781 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T14:45:01.781 INFO:tasks.cephadm.prometheus.a:Stopped prometheus.a 2026-03-09T14:45:01.781 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 --force --keep-logs 2026-03-09T14:45:04.656 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:45:04 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:45:04.656 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:45:04 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:45:05.016 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:45:04 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:45:05.016 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:45:04 vm07 systemd[1]: Stopping Ceph alertmanager.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040... 2026-03-09T14:45:05.016 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:45:04 vm07 bash[51060]: ts=2026-03-09T14:45:04.815Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..." 2026-03-09T14:45:05.016 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:45:04 vm07 bash[80863]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-alertmanager-a 2026-03-09T14:45:05.016 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:45:04 vm07 systemd[1]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@alertmanager.a.service: Deactivated successfully. 2026-03-09T14:45:05.016 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:45:04 vm07 systemd[1]: Stopped Ceph alertmanager.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:45:05.016 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:45:04 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:45:05.405 INFO:journalctl@ceph.alertmanager.a.vm07.stdout:Mar 09 14:45:05 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:45:05.406 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:45:05 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
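The KillMode=none deprecation notice floods the teardown because systemd re-emits it each time it parses line 23 of the generated ceph-<fsid>@.service template; it is a warning, not an error. To see which installed unit files on a host would trigger it, a small scan like the following can be used; this is an illustrative sketch for a typical host layout, not teuthology or cephadm code, and the default directory is an assumption:

    import pathlib

    def find_killmode_none(unit_dir="/etc/systemd/system"):
        """Yield (unit file, line number) pairs that set KillMode=none."""
        for unit in pathlib.Path(unit_dir).glob("*.service"):
            try:
                lines = unit.read_text(errors="replace").splitlines()
            except OSError:
                continue
            for i, line in enumerate(lines, start=1):
                if line.strip().replace(" ", "") == "KillMode=none":
                    yield unit, i

    if __name__ == "__main__":
        for unit, lineno in find_killmode_none():
            print(f"{unit}:{lineno}: KillMode=none (deprecated)")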
2026-03-09T14:45:15.378 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:45:15 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:45:15.636 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:45:15 vm07 systemd[1]: Stopping Ceph node-exporter.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040... 2026-03-09T14:45:15.636 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:45:15 vm07 bash[81109]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-node-exporter-a 2026-03-09T14:45:15.637 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:45:15 vm07 systemd[1]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@node-exporter.a.service: Main process exited, code=exited, status=143/n/a 2026-03-09T14:45:15.637 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:45:15 vm07 systemd[1]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@node-exporter.a.service: Failed with result 'exit-code'. 2026-03-09T14:45:15.637 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:45:15 vm07 systemd[1]: Stopped Ceph node-exporter.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:45:15.906 INFO:journalctl@ceph.node-exporter.a.vm07.stdout:Mar 09 14:45:15 vm07 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:45:37.455 INFO:teuthology.orchestra.run.vm07.stderr:Traceback (most recent call last): 2026-03-09T14:45:37.455 INFO:teuthology.orchestra.run.vm07.stderr: File "/home/ubuntu/cephtest/cephadm", line 8634, in 2026-03-09T14:45:37.455 INFO:teuthology.orchestra.run.vm07.stderr: main() 2026-03-09T14:45:37.455 INFO:teuthology.orchestra.run.vm07.stderr: File "/home/ubuntu/cephtest/cephadm", line 8622, in main 2026-03-09T14:45:37.456 INFO:teuthology.orchestra.run.vm07.stderr: r = ctx.func(ctx) 2026-03-09T14:45:37.456 INFO:teuthology.orchestra.run.vm07.stderr: File "/home/ubuntu/cephtest/cephadm", line 6538, in command_rm_cluster 2026-03-09T14:45:37.456 INFO:teuthology.orchestra.run.vm07.stderr: with open(files[0]) as f: 2026-03-09T14:45:37.456 INFO:teuthology.orchestra.run.vm07.stderr:IsADirectoryError: [Errno 21] Is a directory: '/etc/ceph/ceph.conf' 2026-03-09T14:45:37.469 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T14:45:37.469 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 --force --keep-logs 2026-03-09T14:45:40.389 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:45:40 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
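The rm-cluster traceback above is the real teardown problem: on vm07, /etc/ceph/ceph.conf exists as a directory rather than a file (possibly because something such as a container runtime created a missing bind-mount source as a directory), so the plain open(files[0]) in command_rm_cluster raises IsADirectoryError and the command exits 1. A defensive guard along these lines would let cleanup skip the bogus path; this is a sketch of the idea under that assumption, not the actual cephadm patch:

    import os

    def read_conf_if_file(path="/etc/ceph/ceph.conf"):
        """Best-effort read of a ceph.conf; tolerate the path being absent or a directory."""
        if not os.path.isfile(path):
            # A directory or missing path here is treated the same as "no config",
            # instead of letting open() raise IsADirectoryError / FileNotFoundError.
            return None
        with open(path) as f:
            return f.read()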
2026-03-09T14:45:40.389 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:45:40 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:45:40.648 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:45:40 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:45:40.649 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:45:40 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:45:40.649 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:45:40 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:45:40.649 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:45:40 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:45:41.000 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:45:40 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:45:41.000 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:45:40 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:45:51.124 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:45:51 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:45:51.125 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:45:51 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:46:01.415 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:46:01 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:46:01.415 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:46:01 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:46:01.685 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:46:01 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:46:01.685 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:46:01 vm11 systemd[1]: Stopping Ceph grafana.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040... 2026-03-09T14:46:01.685 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:46:01 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:46:01.972 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:46:01 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T14:46:01.973 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:46:01 vm11 bash[59245]: logger=server t=2026-03-09T14:46:01.686333786Z level=info msg="Shutdown started" reason="System signal: terminated" 2026-03-09T14:46:01.973 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:46:01 vm11 bash[59245]: logger=tracing t=2026-03-09T14:46:01.686380554Z level=info msg="Closing tracing" 2026-03-09T14:46:01.973 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:46:01 vm11 bash[59245]: logger=grafana-apiserver t=2026-03-09T14:46:01.686769235Z level=info msg="StorageObjectCountTracker pruner is exiting" 2026-03-09T14:46:01.973 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:46:01 vm11 bash[59245]: logger=ticker t=2026-03-09T14:46:01.686789092Z level=info msg=stopped last_tick=2026-03-09T14:46:00Z 2026-03-09T14:46:01.973 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:46:01 vm11 bash[63850]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-grafana-a 2026-03-09T14:46:01.973 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:46:01 vm11 systemd[1]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@grafana.a.service: Deactivated successfully. 2026-03-09T14:46:01.973 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:46:01 vm11 systemd[1]: Stopped Ceph grafana.a for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:46:01.973 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 09 14:46:01 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:46:02.224 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:46:02 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:46:02.224 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:46:02 vm11 systemd[1]: Stopping Ceph node-exporter.b for f59f9828-1bc3-11f1-bfd8-7b3d0c866040... 2026-03-09T14:46:02.486 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:46:02 vm11 bash[64000]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040-node-exporter-b 2026-03-09T14:46:02.486 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:46:02 vm11 systemd[1]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@node-exporter.b.service: Main process exited, code=exited, status=143/n/a 2026-03-09T14:46:02.486 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:46:02 vm11 systemd[1]: ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@node-exporter.b.service: Failed with result 'exit-code'. 2026-03-09T14:46:02.486 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:46:02 vm11 systemd[1]: Stopped Ceph node-exporter.b for f59f9828-1bc3-11f1-bfd8-7b3d0c866040. 2026-03-09T14:46:02.486 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 09 14:46:02 vm11 systemd[1]: /etc/systemd/system/ceph-f59f9828-1bc3-11f1-bfd8-7b3d0c866040@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T14:46:02.856 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T14:46:02.863 INFO:teuthology.orchestra.run.vm07.stderr:rm: cannot remove '/etc/ceph/ceph.conf': Is a directory 2026-03-09T14:46:02.863 INFO:teuthology.orchestra.run.vm07.stderr:rm: cannot remove '/etc/ceph/ceph.client.admin.keyring': Is a directory 2026-03-09T14:46:02.863 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T14:46:02.864 DEBUG:teuthology.orchestra.run.vm11:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T14:46:02.870 INFO:tasks.cephadm:Archiving crash dumps... 2026-03-09T14:46:02.870 DEBUG:teuthology.misc:Transferring archived files from vm07:/var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/crash to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/507/remote/vm07/crash 2026-03-09T14:46:02.871 DEBUG:teuthology.orchestra.run.vm07:> sudo tar c -f - -C /var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/crash -- . 2026-03-09T14:46:02.913 INFO:teuthology.orchestra.run.vm07.stderr:tar: /var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/crash: Cannot open: No such file or directory 2026-03-09T14:46:02.919 INFO:teuthology.orchestra.run.vm07.stderr:tar: Error is not recoverable: exiting now 2026-03-09T14:46:02.921 DEBUG:teuthology.misc:Transferring archived files from vm11:/var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/crash to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/507/remote/vm11/crash 2026-03-09T14:46:02.921 DEBUG:teuthology.orchestra.run.vm11:> sudo tar c -f - -C /var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/crash -- . 2026-03-09T14:46:02.929 INFO:teuthology.orchestra.run.vm11.stderr:tar: /var/lib/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/crash: Cannot open: No such file or directory 2026-03-09T14:46:02.929 INFO:teuthology.orchestra.run.vm11.stderr:tar: Error is not recoverable: exiting now 2026-03-09T14:46:02.929 INFO:tasks.cephadm:Checking cluster log for badness... 2026-03-09T14:46:02.929 DEBUG:teuthology.orchestra.run.vm07:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph.log | egrep CEPHADM_ | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v CEPHADM_STRAY_DAEMON | egrep -v CEPHADM_FAILED_DAEMON | egrep -v CEPHADM_AGENT_DOWN | head -n 1 2026-03-09T14:46:02.973 INFO:tasks.cephadm:Compressing logs... 
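Because both /etc/ceph/ceph.conf and /etc/ceph/ceph.client.admin.keyring turn out to be directories on vm07, the task's plain "rm -f" cannot remove them either (rm refuses directories without -r). A cleanup that tolerates either shape would need something like the sketch below; it is illustrative only and would require root on a real host:

    import os
    import shutil

    def force_remove(path):
        """Remove path whether it is a regular file, a symlink, or a directory tree."""
        try:
            if os.path.isdir(path) and not os.path.islink(path):
                shutil.rmtree(path)
            else:
                os.remove(path)
        except FileNotFoundError:
            pass  # already gone; nothing to do

    for p in ("/etc/ceph/ceph.conf", "/etc/ceph/ceph.client.admin.keyring"):
        force_remove(p)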
2026-03-09T14:46:02.989 DEBUG:teuthology.orchestra.run.vm07:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-09T14:46:03.014 DEBUG:teuthology.orchestra.run.vm11:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-09T14:46:03.023 INFO:teuthology.orchestra.run.vm07.stderr:find: gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-09T14:46:03.023 INFO:teuthology.orchestra.run.vm07.stderr:‘/var/log/rbd-target-api’: No such file or directory 2026-03-09T14:46:03.023 INFO:teuthology.orchestra.run.vm11.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-09T14:46:03.023 INFO:teuthology.orchestra.run.vm11.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory 2026-03-09T14:46:03.023 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-osd.3.log 2026-03-09T14:46:03.023 INFO:teuthology.orchestra.run.vm11.stderr:gzip -5 --verbose -- /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-mgr.x.log 2026-03-09T14:46:03.024 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph.log 2026-03-09T14:46:03.025 INFO:teuthology.orchestra.run.vm11.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph.log 2026-03-09T14:46:03.028 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-osd.3.log: 90.2% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-09T14:46:03.028 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-mon.c.log 2026-03-09T14:46:03.031 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph.log: 92.9% -- replaced with /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph.log.gz 2026-03-09T14:46:03.031 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-osd.1.log 2026-03-09T14:46:03.037 INFO:teuthology.orchestra.run.vm11.stderr:/var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-mgr.x.log: gzip -5 --verbose -- /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-client.rgw.foo.vm11.ncyump.log 2026-03-09T14:46:03.038 INFO:teuthology.orchestra.run.vm11.stderr:/var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph.log: gzip -5 --verbose -- /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-mon.b.log 2026-03-09T14:46:03.038 INFO:teuthology.orchestra.run.vm11.stderr:/var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-client.rgw.foo.vm11.ncyump.log: 76.5% -- replaced with /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-client.rgw.foo.vm11.ncyump.log.gz 2026-03-09T14:46:03.038 INFO:teuthology.orchestra.run.vm11.stderr: 91.3% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-09T14:46:03.040 INFO:teuthology.orchestra.run.vm11.stderr: 87.2% -- replaced with /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph.log.gz 2026-03-09T14:46:03.040 INFO:teuthology.orchestra.run.vm11.stderr:gzip -5 --verbose -- /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-osd.5.log 2026-03-09T14:46:03.042 
INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-mon.c.log: gzip -5 --verbose -- /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-mgr.y.log 2026-03-09T14:46:03.049 INFO:teuthology.orchestra.run.vm11.stderr:/var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-mon.b.log: gzip -5 --verbose -- /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-osd.7.log 2026-03-09T14:46:03.053 INFO:teuthology.orchestra.run.vm11.stderr:/var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-osd.5.log: 90.0% -- replaced with /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-mgr.x.log.gz 2026-03-09T14:46:03.053 INFO:teuthology.orchestra.run.vm11.stderr:gzip -5 --verbose -- /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-osd.6.log 2026-03-09T14:46:03.061 INFO:teuthology.orchestra.run.vm11.stderr:/var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-osd.7.log: gzip -5 --verbose -- /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-client.rgw.smpl.vm11.ocxkef.log 2026-03-09T14:46:03.062 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-osd.1.log: gzip -5 --verbose -- /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-client.rgw.foo.vm07.urmgxb.log 2026-03-09T14:46:03.069 INFO:teuthology.orchestra.run.vm11.stderr:/var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-osd.6.log: gzip -5 --verbose -- /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph.audit.log 2026-03-09T14:46:03.069 INFO:teuthology.orchestra.run.vm11.stderr:/var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-client.rgw.smpl.vm11.ocxkef.log: 75.8% -- replaced with /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-client.rgw.smpl.vm11.ocxkef.log.gz 2026-03-09T14:46:03.073 INFO:teuthology.orchestra.run.vm11.stderr:gzip -5 --verbose -- /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-volume.log 2026-03-09T14:46:03.078 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-mgr.y.log: gzip -5 --verbose -- /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-mon.a.log 2026-03-09T14:46:03.079 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-client.rgw.foo.vm07.urmgxb.log: 76.5% -- replaced with /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-client.rgw.foo.vm07.urmgxb.log.gz 2026-03-09T14:46:03.081 INFO:teuthology.orchestra.run.vm11.stderr:/var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph.audit.log: 90.7% -- replaced with /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph.audit.log.gz 2026-03-09T14:46:03.081 INFO:teuthology.orchestra.run.vm11.stderr:gzip -5 --verbose -- /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph.cephadm.log 2026-03-09T14:46:03.084 INFO:teuthology.orchestra.run.vm11.stderr:/var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-osd.4.log 2026-03-09T14:46:03.087 INFO:teuthology.orchestra.run.vm11.stderr:/var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph.cephadm.log: 83.0% -- replaced with /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph.cephadm.log.gz 2026-03-09T14:46:03.098 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-osd.2.log 2026-03-09T14:46:03.114 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-mon.a.log: gzip -5 --verbose -- 
/var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph.audit.log 2026-03-09T14:46:03.118 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-osd.2.log: gzip -5 --verbose -- /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-volume.log 2026-03-09T14:46:03.122 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph.audit.log: gzip -5 --verbose -- /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph.cephadm.log 2026-03-09T14:46:03.125 INFO:teuthology.orchestra.run.vm11.stderr:/var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-osd.4.log: 94.2% -- replaced with /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-volume.log.gz 2026-03-09T14:46:03.134 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/tcmu-runner.log 2026-03-09T14:46:03.135 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph.cephadm.log: 94.3% -- replaced with /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph.audit.log.gz 2026-03-09T14:46:03.136 INFO:teuthology.orchestra.run.vm07.stderr: 90.2% -- replaced with /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph.cephadm.log.gz 2026-03-09T14:46:03.142 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-client.rgw.smpl.vm07.tkkeli.log 2026-03-09T14:46:03.146 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/tcmu-runner.log: 82.7% -- replaced with /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/tcmu-runner.log.gz 2026-03-09T14:46:03.146 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-osd.0.log 2026-03-09T14:46:03.150 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-client.rgw.smpl.vm07.tkkeli.log: 76.5% -- replaced with /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-client.rgw.smpl.vm07.tkkeli.log.gz 2026-03-09T14:46:03.174 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-osd.0.log: 94.2% -- replaced with /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-volume.log.gz 2026-03-09T14:46:03.468 INFO:teuthology.orchestra.run.vm11.stderr: 92.6% -- replaced with /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-mon.b.log.gz 2026-03-09T14:46:03.543 INFO:teuthology.orchestra.run.vm07.stderr: 89.4% -- replaced with /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-mgr.y.log.gz 2026-03-09T14:46:03.561 INFO:teuthology.orchestra.run.vm07.stderr: 92.5% -- replaced with /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-mon.c.log.gz 2026-03-09T14:46:04.237 INFO:teuthology.orchestra.run.vm11.stderr: 93.8% -- replaced with /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-osd.6.log.gz 2026-03-09T14:46:04.291 INFO:teuthology.orchestra.run.vm07.stderr: 94.0% -- replaced with /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-osd.2.log.gz 2026-03-09T14:46:04.330 INFO:teuthology.orchestra.run.vm07.stderr: 91.3% -- replaced with /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-mon.a.log.gz 2026-03-09T14:46:04.439 INFO:teuthology.orchestra.run.vm11.stderr: 93.9% -- replaced with /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-osd.5.log.gz 2026-03-09T14:46:04.519 
INFO:teuthology.orchestra.run.vm11.stderr: 94.1% -- replaced with /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-osd.7.log.gz 2026-03-09T14:46:04.576 INFO:teuthology.orchestra.run.vm11.stderr: 94.0% -- replaced with /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-osd.4.log.gz 2026-03-09T14:46:04.577 INFO:teuthology.orchestra.run.vm11.stderr: 2026-03-09T14:46:04.577 INFO:teuthology.orchestra.run.vm11.stderr:real 0m1.560s 2026-03-09T14:46:04.577 INFO:teuthology.orchestra.run.vm11.stderr:user 0m2.885s 2026-03-09T14:46:04.577 INFO:teuthology.orchestra.run.vm11.stderr:sys 0m0.169s 2026-03-09T14:46:04.677 INFO:teuthology.orchestra.run.vm07.stderr: 93.9% -- replaced with /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-osd.0.log.gz 2026-03-09T14:46:04.709 INFO:teuthology.orchestra.run.vm07.stderr: 94.0% -- replaced with /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-osd.1.log.gz 2026-03-09T14:46:04.742 INFO:teuthology.orchestra.run.vm07.stderr: 94.0% -- replaced with /var/log/ceph/f59f9828-1bc3-11f1-bfd8-7b3d0c866040/ceph-osd.3.log.gz 2026-03-09T14:46:04.743 INFO:teuthology.orchestra.run.vm07.stderr: 2026-03-09T14:46:04.743 INFO:teuthology.orchestra.run.vm07.stderr:real 0m1.727s 2026-03-09T14:46:04.743 INFO:teuthology.orchestra.run.vm07.stderr:user 0m3.228s 2026-03-09T14:46:04.743 INFO:teuthology.orchestra.run.vm07.stderr:sys 0m0.181s 2026-03-09T14:46:04.743 INFO:tasks.cephadm:Archiving logs... 2026-03-09T14:46:04.743 DEBUG:teuthology.misc:Transferring archived files from vm07:/var/log/ceph to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/507/remote/vm07/log 2026-03-09T14:46:04.743 DEBUG:teuthology.orchestra.run.vm07:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-09T14:46:04.960 DEBUG:teuthology.misc:Transferring archived files from vm11:/var/log/ceph to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/507/remote/vm11/log 2026-03-09T14:46:04.960 DEBUG:teuthology.orchestra.run.vm11:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-09T14:46:05.062 INFO:tasks.cephadm:Removing cluster... 
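The log-archiving step above works by streaming a tarball of the remote directory to stdout ("sudo tar c -f - -C /var/log/ceph -- .") and capturing it on the teuthology host under the job's archive path. A self-contained sketch of that pull-over-ssh pattern is shown below; it is illustrative rather than teuthology's own helper, and the host name and paths in the usage comment are placeholders:

    import os
    import subprocess
    import tarfile

    def pull_remote_dir(host, remote_dir, local_dir):
        """Stream `tar c` output from a remote directory over ssh and extract it locally."""
        os.makedirs(local_dir, exist_ok=True)
        proc = subprocess.Popen(
            ["ssh", host, "sudo", "tar", "c", "-f", "-", "-C", remote_dir, "--", "."],
            stdout=subprocess.PIPE,
        )
        with tarfile.open(fileobj=proc.stdout, mode="r|") as tar:
            tar.extractall(local_dir)
        if proc.wait() != 0:
            raise RuntimeError(f"tar on {host} exited with {proc.returncode}")

    # Example usage (placeholder host and paths):
    # pull_remote_dir("vm07.local", "/var/log/ceph", "./remote/vm07/log")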
2026-03-09T14:46:05.062 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 --force 2026-03-09T14:46:05.693 INFO:teuthology.orchestra.run.vm07.stderr:Traceback (most recent call last): 2026-03-09T14:46:05.693 INFO:teuthology.orchestra.run.vm07.stderr: File "/home/ubuntu/cephtest/cephadm", line 8634, in 2026-03-09T14:46:05.693 INFO:teuthology.orchestra.run.vm07.stderr: main() 2026-03-09T14:46:05.693 INFO:teuthology.orchestra.run.vm07.stderr: File "/home/ubuntu/cephtest/cephadm", line 8622, in main 2026-03-09T14:46:05.693 INFO:teuthology.orchestra.run.vm07.stderr: r = ctx.func(ctx) 2026-03-09T14:46:05.693 INFO:teuthology.orchestra.run.vm07.stderr: File "/home/ubuntu/cephtest/cephadm", line 6538, in command_rm_cluster 2026-03-09T14:46:05.694 INFO:teuthology.orchestra.run.vm07.stderr: with open(files[0]) as f: 2026-03-09T14:46:05.694 INFO:teuthology.orchestra.run.vm07.stderr:IsADirectoryError: [Errno 21] Is a directory: '/etc/ceph/ceph.conf' 2026-03-09T14:46:05.708 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T14:46:05.708 INFO:tasks.cephadm:Teardown complete 2026-03-09T14:46:05.708 ERROR:teuthology.run_tasks:Manager failed: cephadm Traceback (most recent call last): File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks suppress = manager.__exit__(*exc_info) File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__ next(self.gen) File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/cephadm.py", line 2216, in task with contextutil.nested( File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__ next(self.gen) File "/home/teuthos/teuthology/teuthology/contextutil.py", line 54, in nested raise exc[1] File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__ self.gen.throw(typ, value, traceback) File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/cephadm.py", line 1845, in initialize_config yield File "/home/teuthos/teuthology/teuthology/contextutil.py", line 46, in nested if exit(*exc): File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__ next(self.gen) File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/cephadm.py", line 229, in download_cephadm _rm_cluster(ctx, cluster_name) File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/cephadm.py", line 383, in _rm_cluster remote.run(args=[ File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run r = self._runner(client=self.ssh, name=self.shortname, **kwargs) File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run r.wait() File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait self._raise_for_status() File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status raise CommandFailedError( teuthology.exceptions.CommandFailedError: Command failed on vm07 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 --force' 2026-03-09T14:46:05.709 DEBUG:teuthology.run_tasks:Unwinding manager clock 2026-03-09T14:46:05.711 
INFO:teuthology.task.clock:Checking final clock skew... 2026-03-09T14:46:05.711 DEBUG:teuthology.orchestra.run.vm07:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-09T14:46:05.712 DEBUG:teuthology.orchestra.run.vm11:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-09T14:46:05.967 INFO:teuthology.orchestra.run.vm11.stdout: remote refid st t when poll reach delay offset jitter 2026-03-09T14:46:05.967 INFO:teuthology.orchestra.run.vm11.stdout:============================================================================== 2026-03-09T14:46:05.967 INFO:teuthology.orchestra.run.vm11.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T14:46:05.967 INFO:teuthology.orchestra.run.vm11.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T14:46:05.967 INFO:teuthology.orchestra.run.vm11.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T14:46:05.967 INFO:teuthology.orchestra.run.vm11.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T14:46:05.967 INFO:teuthology.orchestra.run.vm11.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T14:46:05.967 INFO:teuthology.orchestra.run.vm11.stdout:+vps-fra9.orlean 195.145.119.188 2 u 34 64 377 29.493 -5.004 3.370 2026-03-09T14:46:05.967 INFO:teuthology.orchestra.run.vm11.stdout:+ntp1.lwlcom.net .GPS. 1 u 26 64 377 30.798 -1.876 1.455 2026-03-09T14:46:05.967 INFO:teuthology.orchestra.run.vm11.stdout:#vps-fra1.orlean 195.145.119.188 2 u 18 64 377 21.881 -4.343 1.321 2026-03-09T14:46:05.967 INFO:teuthology.orchestra.run.vm11.stdout:#stratum2-2.NTP. 129.70.137.82 2 u 76 64 373 30.523 -3.835 3.107 2026-03-09T14:46:05.967 INFO:teuthology.orchestra.run.vm11.stdout:+static.222.16.4 35.73.197.144 2 u 35 64 377 0.284 -4.552 1.241 2026-03-09T14:46:05.967 INFO:teuthology.orchestra.run.vm11.stdout:+172-104-138-148 129.70.132.32 3 u 26 64 377 22.615 -7.302 2.318 2026-03-09T14:46:05.967 INFO:teuthology.orchestra.run.vm11.stdout:-netcup02.therav 189.97.54.122 2 u 21 64 377 28.684 -8.351 1.934 2026-03-09T14:46:05.967 INFO:teuthology.orchestra.run.vm11.stdout:+141.144.246.224 146.131.121.246 2 u 20 64 377 29.627 -3.615 3.753 2026-03-09T14:46:05.967 INFO:teuthology.orchestra.run.vm11.stdout:+pve2.h4x-gamers 192.53.103.108 2 u 26 64 377 25.010 -6.417 2.410 2026-03-09T14:46:05.967 INFO:teuthology.orchestra.run.vm11.stdout:*ntp2.rrze.uni-e .MBGh. 1 u 24 64 377 26.154 -4.772 1.546 2026-03-09T14:46:05.967 INFO:teuthology.orchestra.run.vm11.stdout:#v22025082392863 129.69.253.1 2 u 25 64 377 28.245 -7.569 1.257 2026-03-09T14:46:05.967 INFO:teuthology.orchestra.run.vm11.stdout:-185.125.190.57 194.121.207.249 2 u 56 64 377 35.273 -5.302 1.345 2026-03-09T14:46:05.967 INFO:teuthology.orchestra.run.vm11.stdout:+node-1.infogral 168.239.11.197 2 u 23 64 377 23.496 -4.819 1.264 2026-03-09T14:46:05.990 INFO:teuthology.orchestra.run.vm07.stdout: remote refid st t when poll reach delay offset jitter 2026-03-09T14:46:05.990 INFO:teuthology.orchestra.run.vm07.stdout:============================================================================== 2026-03-09T14:46:05.990 INFO:teuthology.orchestra.run.vm07.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T14:46:05.990 INFO:teuthology.orchestra.run.vm07.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T14:46:05.990 INFO:teuthology.orchestra.run.vm07.stdout: 2.ubuntu.pool.n .POOL. 
16 p - 64 0 0.000 +0.000 0.000 2026-03-09T14:46:05.990 INFO:teuthology.orchestra.run.vm07.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T14:46:05.990 INFO:teuthology.orchestra.run.vm07.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T14:46:05.990 INFO:teuthology.orchestra.run.vm07.stdout:#pve2.h4x-gamers 192.53.103.108 2 u 19 128 377 24.962 -9.218 1.907 2026-03-09T14:46:05.990 INFO:teuthology.orchestra.run.vm07.stdout:-stratum2-2.NTP. 129.70.137.82 2 u 35 64 367 30.427 -9.241 1.956 2026-03-09T14:46:05.990 INFO:teuthology.orchestra.run.vm07.stdout:*ntp2.rrze.uni-e .MBGh. 1 u 22 128 377 26.132 -7.877 1.751 2026-03-09T14:46:05.990 INFO:teuthology.orchestra.run.vm07.stdout:+141.144.246.224 146.131.121.246 2 u 33 64 377 29.171 -8.070 1.190 2026-03-09T14:46:05.990 INFO:teuthology.orchestra.run.vm07.stdout:-vps-fra9.orlean 195.145.119.188 2 u 28 64 377 26.747 -9.204 5.603 2026-03-09T14:46:05.990 INFO:teuthology.orchestra.run.vm07.stdout:+static.222.16.4 35.73.197.144 2 u 36 64 377 0.357 -6.900 1.879 2026-03-09T14:46:05.990 INFO:teuthology.orchestra.run.vm07.stdout:#netcup02.therav 189.97.54.122 2 u 26 128 377 28.243 -11.548 1.960 2026-03-09T14:46:05.990 INFO:teuthology.orchestra.run.vm07.stdout:#butterfly.post- 124.216.164.14 2 u 24 128 377 28.714 -8.152 1.300 2026-03-09T14:46:05.990 INFO:teuthology.orchestra.run.vm07.stdout:-ntp1.lwlcom.net .GPS. 1 u 32 64 377 30.930 -5.856 1.896 2026-03-09T14:46:05.990 INFO:teuthology.orchestra.run.vm07.stdout:#node-1.infogral 168.239.11.197 2 u 23 128 377 23.523 -5.769 2.714 2026-03-09T14:46:05.990 INFO:teuthology.orchestra.run.vm07.stdout:+stratum2-4.NTP. 129.70.137.82 2 u 32 64 377 30.277 -7.011 2.637 2026-03-09T14:46:05.990 INFO:teuthology.orchestra.run.vm07.stdout:+185.125.190.58 145.238.80.80 2 u 60 64 377 32.056 -8.360 1.585 2026-03-09T14:46:05.990 INFO:teuthology.orchestra.run.vm07.stdout:+vps-fra1.orlean 195.145.119.188 2 u 26 128 377 22.047 -7.155 1.606 2026-03-09T14:46:05.990 INFO:teuthology.orchestra.run.vm07.stdout:+172-104-138-148 129.70.132.32 3 u 19 128 377 22.687 -7.744 1.800 2026-03-09T14:46:05.990 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab 2026-03-09T14:46:05.993 INFO:teuthology.task.ansible:Skipping ansible cleanup... 2026-03-09T14:46:05.993 DEBUG:teuthology.run_tasks:Unwinding manager selinux 2026-03-09T14:46:05.995 DEBUG:teuthology.run_tasks:Unwinding manager pcp 2026-03-09T14:46:05.997 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer 2026-03-09T14:46:05.998 INFO:teuthology.task.internal:Duration was 1308.218768 seconds 2026-03-09T14:46:05.999 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog 2026-03-09T14:46:06.001 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring... 2026-03-09T14:46:06.001 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart 2026-03-09T14:46:06.002 DEBUG:teuthology.orchestra.run.vm11:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart 2026-03-09T14:46:06.035 INFO:teuthology.task.internal.syslog:Checking logs for errors... 
2026-03-09T14:46:06.035 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm07.local 2026-03-09T14:46:06.035 DEBUG:teuthology.orchestra.run.vm07:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1 2026-03-09T14:46:06.089 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm11.local 2026-03-09T14:46:06.089 DEBUG:teuthology.orchestra.run.vm11:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1 2026-03-09T14:46:06.099 INFO:teuthology.task.internal.syslog:Gathering journactl... 2026-03-09T14:46:06.099 DEBUG:teuthology.orchestra.run.vm07:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-09T14:46:06.131 DEBUG:teuthology.orchestra.run.vm11:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-09T14:46:06.232 INFO:teuthology.task.internal.syslog:Compressing syslogs... 
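The syslog check above reduces to one long grep pipeline: keep kernel-log lines containing the words BUG, INFO, or DEADLOCK, strip a list of known-benign patterns with repeated "grep -v", and flag the run if anything survives ("head -n 1"). The same filter is easy to express in Python for offline triage; the ignore list below is deliberately abbreviated and the file name "kern.log" is a placeholder:

    import re

    MATCH = re.compile(r"\b(BUG|INFO|DEADLOCK)\b")
    IGNORE = [
        re.compile(p) for p in (
            r"task .* blocked for more than .* seconds",
            r"lockdep is turned off",
            r"DEBUG: fsize",
            r"CRON",
            r"ceph-crash",
            # ... the real task carries a much longer list of exclusions
        )
    ]

    def first_bad_line(path):
        """Return the first suspicious kernel-log line, or None if the log looks clean."""
        with open(path, errors="replace") as f:
            for line in f:
                if MATCH.search(line) and not any(p.search(line) for p in IGNORE):
                    return line.rstrip()
        return None

    print(first_bad_line("kern.log"))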
2026-03-09T14:46:06.233 DEBUG:teuthology.orchestra.run.vm07:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose -- 2026-03-09T14:46:06.233 DEBUG:teuthology.orchestra.run.vm11:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose -- 2026-03-09T14:46:06.240 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log 2026-03-09T14:46:06.240 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log 2026-03-09T14:46:06.240 INFO:teuthology.orchestra.run.vm07.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz 2026-03-09T14:46:06.240 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-09T14:46:06.240 INFO:teuthology.orchestra.run.vm11.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log 2026-03-09T14:46:06.240 INFO:teuthology.orchestra.run.vm11.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log 2026-03-09T14:46:06.241 INFO:teuthology.orchestra.run.vm07.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: /home/ubuntu/cephtest/archive/syslog/journalctl.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz 2026-03-09T14:46:06.241 INFO:teuthology.orchestra.run.vm11.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-09T14:46:06.241 INFO:teuthology.orchestra.run.vm11.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz 2026-03-09T14:46:06.241 INFO:teuthology.orchestra.run.vm11.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz 2026-03-09T14:46:06.258 INFO:teuthology.orchestra.run.vm11.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 88.9% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz 2026-03-09T14:46:06.262 INFO:teuthology.orchestra.run.vm07.stderr: 91.1% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz 2026-03-09T14:46:06.263 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo 2026-03-09T14:46:06.272 INFO:teuthology.task.internal:Restoring /etc/sudoers... 
2026-03-09T14:46:06.272 DEBUG:teuthology.orchestra.run.vm07:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers 2026-03-09T14:46:06.315 DEBUG:teuthology.orchestra.run.vm11:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers 2026-03-09T14:46:06.322 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump 2026-03-09T14:46:06.326 DEBUG:teuthology.orchestra.run.vm07:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump 2026-03-09T14:46:06.359 DEBUG:teuthology.orchestra.run.vm11:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump 2026-03-09T14:46:06.367 INFO:teuthology.orchestra.run.vm07.stdout:kernel.core_pattern = core 2026-03-09T14:46:06.372 INFO:teuthology.orchestra.run.vm11.stdout:kernel.core_pattern = core 2026-03-09T14:46:06.381 DEBUG:teuthology.orchestra.run.vm07:> test -e /home/ubuntu/cephtest/archive/coredump 2026-03-09T14:46:06.421 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T14:46:06.421 DEBUG:teuthology.orchestra.run.vm11:> test -e /home/ubuntu/cephtest/archive/coredump 2026-03-09T14:46:06.427 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T14:46:06.427 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive 2026-03-09T14:46:06.429 INFO:teuthology.task.internal:Transferring archived files... 2026-03-09T14:46:06.430 DEBUG:teuthology.misc:Transferring archived files from vm07:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/507/remote/vm07 2026-03-09T14:46:06.430 DEBUG:teuthology.orchestra.run.vm07:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- . 2026-03-09T14:46:06.470 DEBUG:teuthology.misc:Transferring archived files from vm11:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/507/remote/vm11 2026-03-09T14:46:06.470 DEBUG:teuthology.orchestra.run.vm11:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- . 2026-03-09T14:46:06.478 INFO:teuthology.task.internal:Removing archive directory... 2026-03-09T14:46:06.478 DEBUG:teuthology.orchestra.run.vm07:> rm -rf -- /home/ubuntu/cephtest/archive 2026-03-09T14:46:06.511 DEBUG:teuthology.orchestra.run.vm11:> rm -rf -- /home/ubuntu/cephtest/archive 2026-03-09T14:46:06.523 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload 2026-03-09T14:46:06.526 INFO:teuthology.task.internal:Not uploading archives. 2026-03-09T14:46:06.526 DEBUG:teuthology.run_tasks:Unwinding manager internal.base 2026-03-09T14:46:06.528 INFO:teuthology.task.internal:Tidying up after the test... 
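The coredump teardown above restores kernel.core_pattern, deletes any core files that `file` attributes to systemd-sysusers, and removes the directory only if it ends up empty; the subsequent "test -e .../coredump" exiting 1 therefore means no real core dumps were collected on either host. A sketch of the same pruning logic is below; the directory path is a placeholder and this is not the teuthology implementation:

    import pathlib
    import subprocess

    def prune_sysusers_cores(coredir="archive/coredump"):
        """Delete core files that `file` identifies as coming from systemd-sysusers."""
        d = pathlib.Path(coredir)
        if not d.is_dir():
            return
        for core in d.iterdir():
            if not core.is_file():
                continue
            desc = subprocess.run(["file", str(core)], capture_output=True, text=True).stdout
            if "systemd-sysusers" in desc:
                core.unlink()
        try:
            d.rmdir()      # only succeeds if nothing else was collected
        except OSError:
            pass           # keep the directory when real cores remain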
2026-03-09T14:46:06.528 DEBUG:teuthology.orchestra.run.vm07:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest 2026-03-09T14:46:06.555 DEBUG:teuthology.orchestra.run.vm11:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest 2026-03-09T14:46:06.557 INFO:teuthology.orchestra.run.vm07.stdout: 258078 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 9 14:46 /home/ubuntu/cephtest 2026-03-09T14:46:06.557 INFO:teuthology.orchestra.run.vm07.stdout: 258199 316 -rwxrwxr-x 1 ubuntu ubuntu 320521 Mar 9 14:26 /home/ubuntu/cephtest/cephadm 2026-03-09T14:46:06.558 INFO:teuthology.orchestra.run.vm07.stderr:rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty 2026-03-09T14:46:06.565 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T14:46:06.565 ERROR:teuthology.run_tasks:Manager failed: internal.base Traceback (most recent call last): File "/home/teuthos/teuthology/teuthology/task/internal/__init__.py", line 48, in base yield File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks suppress = manager.__exit__(*exc_info) File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__ next(self.gen) File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/cephadm.py", line 2216, in task with contextutil.nested( File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__ next(self.gen) File "/home/teuthos/teuthology/teuthology/contextutil.py", line 54, in nested raise exc[1] File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__ self.gen.throw(typ, value, traceback) File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/cephadm.py", line 1845, in initialize_config yield File "/home/teuthos/teuthology/teuthology/contextutil.py", line 46, in nested if exit(*exc): File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__ next(self.gen) File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/cephadm.py", line 229, in download_cephadm _rm_cluster(ctx, cluster_name) File "/home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks/cephadm.py", line 383, in _rm_cluster remote.run(args=[ File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run r = self._runner(client=self.ssh, name=self.shortname, **kwargs) File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run r.wait() File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait self._raise_for_status() File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status raise CommandFailedError( teuthology.exceptions.CommandFailedError: Command failed on vm07 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 --force' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks suppress = manager.__exit__(*exc_info) File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__ self.gen.throw(typ, value, traceback) File 
"/home/teuthos/teuthology/teuthology/task/internal/__init__.py", line 53, in base run.wait( File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 485, in wait proc.wait() File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait self._raise_for_status() File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status raise CommandFailedError( teuthology.exceptions.CommandFailedError: Command failed on vm07 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest' 2026-03-09T14:46:06.566 DEBUG:teuthology.run_tasks:Unwinding manager console_log 2026-03-09T14:46:06.568 DEBUG:teuthology.run_tasks:Exception was not quenched, exiting: CommandFailedError: Command failed on vm07 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 --force' 2026-03-09T14:46:06.569 INFO:teuthology.run:Summary data: description: orch/cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/off mon_election/connectivity} duration: 1308.2187683582306 failure_reason: 'Command failed on vm07 with status 1: ''sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid f59f9828-1bc3-11f1-bfd8-7b3d0c866040 --force''' owner: kyr status: fail success: false 2026-03-09T14:46:06.569 DEBUG:teuthology.report:Pushing job info to http://localhost:8080 2026-03-09T14:46:06.570 INFO:teuthology.orchestra.run.vm11.stdout: 258078 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 9 14:46 /home/ubuntu/cephtest 2026-03-09T14:46:06.570 INFO:teuthology.orchestra.run.vm11.stdout: 258199 316 -rwxrwxr-x 1 ubuntu ubuntu 320521 Mar 9 14:26 /home/ubuntu/cephtest/cephadm 2026-03-09T14:46:06.570 INFO:teuthology.orchestra.run.vm11.stderr:rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty 2026-03-09T14:46:06.588 INFO:teuthology.run:FAIL